Results 1 - 20 of 33
1.
J Electrocardiol ; 87: 153792, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39255653

ABSTRACT

INTRODUCTION: Deep learning (DL) models offer improved performance in electrocardiogram (ECG)-based classification over rule-based methods. However, for widespread adoption by clinicians, explainability methods, like saliency maps, are essential. METHODS: On a subset of 100 ECGs from patients with chest pain, we generated saliency maps using a previously validated convolutional neural network for occlusion myocardial infarction (OMI) classification. Three clinicians reviewed ECG-saliency map dyads, first assessing the likelihood of OMI from standard ECGs and then evaluating clinical relevance and helpfulness of the saliency maps, as well as their confidence in the model's predictions. Questions were answered on a Likert scale ranging from +3 (most useful/relevant) to -3 (least useful/relevant). RESULTS: The adjudicated accuracy of the three clinicians matched the DL model when considering area under the receiver operating characteristic curve (AUC) and F1 score (AUC 0.855 vs. 0.872, F1 score = 0.789 vs. 0.747). On average, clinicians found saliency maps slightly clinically relevant (0.96 ± 0.92) and slightly helpful (0.66 ± 0.98) in identifying or ruling out OMI but had higher confidence in the model's predictions (1.71 ± 0.56). Clinicians noted that leads I and aVL were often emphasized, even when obvious ST changes were present in other leads. CONCLUSION: In this clinical usability study, clinicians deemed saliency maps somewhat helpful in enhancing the explainability of DL-based ECG models. The spatial convolutional layers across the 12 leads in these models appear to contribute to the discrepancy between ECG segments considered most relevant by clinicians and segments that drove DL model predictions.
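Saliency maps of this kind are typically derived from input gradients of the trained network. The sketch below is a minimal, hypothetical illustration for a 12-lead ECG tensor, using a toy 1D CNN as a stand-in for the validated OMI classifier (which is not reproduced here); per-lead relevance is summarized by averaging absolute gradients.

```python
import torch
import torch.nn as nn

# toy 12-lead ECG classifier standing in for the validated OMI network (assumption)
model = nn.Sequential(
    nn.Conv1d(12, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
)
model.eval()

ecg = torch.randn(1, 12, 5000, requires_grad=True)   # e.g. 10 s at 500 Hz, 12 leads
logit = model(ecg).squeeze()
logit.backward()                                      # gradients of the OMI score w.r.t. the input

saliency = ecg.grad.abs().squeeze(0)                  # (12, 5000): per-lead, per-sample relevance
per_lead = saliency.mean(dim=1)                       # aggregate to one relevance value per lead
print("most influential lead index:", int(per_lead.argmax()))
```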

2.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991884

ABSTRACT

Terminal neurological conditions affect millions of people worldwide and hinder them from performing their daily tasks and movements normally. Brain-computer interfaces (BCIs) are the best hope for many individuals with motor deficiencies, helping patients interact with the outside world and handle daily tasks without assistance. Machine learning-based BCI systems have therefore emerged as non-invasive techniques for reading signals from the brain and interpreting them into commands that help these people perform diverse limb motor tasks. This paper proposes an innovative and improved machine learning-based BCI system that analyzes EEG signals obtained from motor imagery to distinguish among various limb motor tasks, based on BCI competition III dataset IVa. The proposed EEG signal processing pipeline performs the following major steps. The first step uses a meta-heuristic optimization technique, the whale optimization algorithm (WOA), to select the optimal features for discriminating between neural activity patterns. The pipeline then uses machine learning models such as LDA, k-NN, DT, RF, and LR to analyze the chosen features and enhance the precision of EEG signal analysis. The proposed BCI system, which merges WOA as a feature selection method with an optimized k-NN classification model, achieved an overall accuracy of 98.6%, outperforming other machine learning models and previous techniques on the BCI competition III dataset IVa. Additionally, the contribution of each EEG feature to the ML classification model is reported using Explainable AI (XAI) tools, which provide insights into the individual contributions of the features to the model's predictions. By incorporating XAI techniques, the results of this study offer greater transparency and understanding of the relationship between the EEG features and the model's predictions. The proposed method shows potential for controlling diverse limb motor tasks to help people with limb impairments and enhance their quality of life.
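As an illustration of the WOA-plus-k-NN step described above, the sketch below runs a simplified binary whale optimization over synthetic features (a stand-in for the real EEG features from BCI competition III dataset IVa), scoring each candidate feature subset by 5-fold cross-validated k-NN accuracy. The condensed update rules and the 0.5 binarization threshold are assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# synthetic stand-in for extracted EEG features
X, y = make_classification(n_samples=200, n_features=30, n_informative=8, random_state=0)

def fitness(position):
    mask = position > 0.5                      # binarize a whale's position into a feature mask
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=5).mean()

n_whales, n_iter, dim = 10, 20, X.shape[1]
pos = rng.random((n_whales, dim))
scores = np.array([fitness(p) for p in pos])
best_pos, best_score = pos[scores.argmax()].copy(), scores.max()

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                     # exploration coefficient decays from 2 to 0
    for i in range(n_whales):
        r1, r2 = rng.random(dim), rng.random(dim)
        A, C = 2 * a * r1 - a, 2 * r2
        if rng.random() < 0.5:                 # shrinking encirclement vs. random exploration
            leader = best_pos if np.abs(A).mean() < 1 else pos[rng.integers(n_whales)]
            pos[i] = leader - A * np.abs(C * leader - pos[i])
        else:                                  # spiral (bubble-net) move around the best whale
            l = rng.uniform(-1, 1, dim)
            pos[i] = np.abs(best_pos - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best_pos
        pos[i] = np.clip(pos[i], 0, 1)
        score = fitness(pos[i])
        if score > best_score:
            best_pos, best_score = pos[i].copy(), score

print(f"selected {(best_pos > 0.5).sum()} features, 5-fold CV accuracy {best_score:.3f}")
```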


Subjects
Brain-Computer Interfaces, Quality of Life, Electroencephalography/methods, Algorithms, Machine Learning
3.
J Environ Manage ; 342: 118149, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37187074

ABSTRACT

Deep learning networks powered by AI are essential predictive tools relying on image data availability and processing hardware advancements. However, little attention has been paid to explainable AI (XAI) in application fields, including environmental management. This study develops an explainability framework with a triadic structure focusing on input, AI model, and output. The framework provides three main contributions. (1) A context-based augmentation of input data to maximize generalizability and minimize overfitting. (2) A direct monitoring of AI model layers and parameters to use leaner (lighter) networks suitable for edge device deployment. (3) An output explanation procedure focusing on interpretability and robustness of predictive decisions by AI networks. These contributions significantly advance the state of the art in XAI for environmental management research, offering implications for improved understanding and utilization of AI networks in this field.


Subjects
Conservation of Natural Resources, Deep Learning
4.
Entropy (Basel) ; 25(10)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37895550

ABSTRACT

Recent advancements in artificial intelligence (AI) technology have raised concerns about the ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an "entropy lens" to root the study in information theory and enhance transparency and trust in "black box" AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human-machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework's ability to measure trust in the design and management of AI systems.
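To make the "entropy lens" concrete, the snippet below computes the Shannon entropy of a model's predictive distribution, a common proxy for the kind of uncertainty the framework associates with reduced trust. The example distributions and their interpretation are illustrative assumptions, not part of the proposed maturity model.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Entropy (in bits) of a predictive distribution; higher entropy = less certainty."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p + eps)).sum())

confident = [0.95, 0.03, 0.02]   # low entropy: easier for a human teammate to trust
uncertain = [0.40, 0.35, 0.25]   # high entropy: a candidate for human review
print(f"{shannon_entropy(confident):.3f} bits vs. {shannon_entropy(uncertain):.3f} bits")
```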

5.
Sensors (Basel) ; 22(17)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36080974

ABSTRACT

Class activation map (CAM) helps to formulate saliency maps that aid in interpreting the deep neural network's prediction. Gradient-based methods are generally faster than other branches of vision interpretability and independent of human guidance. The performance of CAM-like studies depends on the governing model's layer response and the influences of the gradients. Typical gradient-oriented CAM studies rely on weighted aggregation for saliency map estimation by projecting the gradient maps into single-weight values, which may lead to an over-generalized saliency map. To address this issue, we use a global guidance map to rectify the weighted aggregation operation during saliency estimation, where resultant interpretations are comparatively cleaner and instance-specific. We obtain the global guidance map by performing elementwise multiplication between the feature maps and their corresponding gradient maps. To validate our study, we compare the proposed study with nine different saliency visualizers. In addition, we use seven commonly used evaluation metrics for quantitative comparison. The proposed scheme achieves significant improvement over the test images from the ImageNet, MS-COCO 14, and PASCAL VOC 2012 datasets.
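The sketch below illustrates the quantities involved: feature maps and their gradients are captured from the last convolutional block of a stand-in ResNet, combined element-wise into a guidance map, and contrasted with the classic Grad-CAM weighted aggregation. The authors' exact rectification is not reproduced; the element-wise variant shown is only an approximation of the idea.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()       # stand-in backbone; any CNN works
store = {}

def forward_hook(module, inputs, output):
    store["act"] = output                                        # feature maps A
    output.register_hook(lambda grad: store.update(grad=grad))   # their gradients dY/dA

model.layer4.register_forward_hook(forward_hook)

x = torch.randn(1, 3, 224, 224)                    # stand-in for a preprocessed image
score = model(x)[0].max()                          # top-class score
score.backward()

A, G = store["act"], store["grad"]                 # both (1, 512, 7, 7)
guidance = A * G                                   # element-wise "global guidance" map
grad_cam = F.relu((G.mean(dim=(2, 3), keepdim=True) * A).sum(dim=1))  # classic weighted aggregation
guided_cam = F.relu(guidance.sum(dim=1))           # element-wise alternative (approximation)
print(grad_cam.shape, guided_cam.shape)            # (1, 7, 7); upsampled to image size in practice
```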

6.
Sensors (Basel) ; 22(18)2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36146271

ABSTRACT

Skin cancer is among the most prevalent and life-threatening forms of cancer that occur worldwide. Traditional methods of skin cancer detection need an in-depth physical examination by a medical professional, which is time-consuming in some cases. Recently, computer-aided medical diagnostic systems have gained popularity due to their effectiveness and efficiency. These systems can assist dermatologists in the early detection of skin cancer, which can be lifesaving. In this paper, the pre-trained MobileNetV2 and DenseNet201 deep learning models are modified by adding additional convolution layers to effectively detect skin cancer. Specifically, the modification stacks three convolutional layers at the end of each model. A thorough comparison shows that the modified models outperform the original pre-trained MobileNetV2 and DenseNet201 models. The proposed method can detect both benign and malignant classes. The results indicate that the proposed Modified DenseNet201 model achieves 95.50% accuracy and state-of-the-art performance when compared with other techniques present in the literature. In addition, the sensitivity and specificity of the Modified DenseNet201 model are 93.96% and 97.03%, respectively.
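A minimal Keras sketch of the described modification, assuming the three added convolutional layers sit between the pretrained DenseNet201 feature extractor and the classification head; the filter counts and the binary sigmoid head are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained backbone, as in the paper (weights are downloaded on first use)
base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3))
x = base.output                                    # (7, 7, 1920) feature maps
for filters in (512, 256, 128):                    # three stacked conv layers (assumed sizes)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)     # benign vs. malignant

model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, ...) would follow on a dermoscopy dataset
```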


Subjects
Deep Learning, Skin Neoplasms, Humans, Neural Networks (Computer), Sensitivity and Specificity, Skin/pathology, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology
7.
Front Robot AI ; 11: 1375490, 2024.
Article in English | MEDLINE | ID: mdl-39104806

ABSTRACT

Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy to dictate the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff: participants perceived language-based policy explanations to be significantly more usable, but they were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree. Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for future work on policy explanations by XAI developers.
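The tree-based explanation modality can be approximated by fitting a surrogate decision tree to logged state-action pairs from the agent's policy. The sketch below uses a hypothetical hand-coded policy and invented feature names purely for illustration; the printed rule text is the kind of artifact a participant in the Tree condition would inspect.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# hypothetical logged rollouts: 4-dimensional states and the actions the policy chose
states = rng.normal(size=(1000, 4))
actions = (states[:, 0] + 0.5 * states[:, 2] > 0).astype(int)  # stand-in policy, not a real agent

surrogate = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print("fidelity to the logged policy:", surrogate.score(states, actions))
print(export_text(surrogate, feature_names=["pos", "vel", "angle", "ang_vel"]))
```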

8.
Stud Health Technol Inform ; 316: 736-740, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176900

ABSTRACT

This study leverages data from a Canadian database of primary care Electronic Medical Records to develop machine learning models predicting type 2 diabetes mellitus (T2D), prediabetes, or normoglycemia. These models are used as a basis for extracting counterfactual explanations and for deriving personalized changes in biomarkers to prevent T2D onset, particularly in the still reversible prediabetic state. The models achieve satisfactory performance. Furthermore, feature importance analysis underscores the significance of fasting blood sugar and glycated hemoglobin, while counterfactual explanations emphasize the centrality of keeping body mass index and cholesterol indicators within or close to the clinically desirable ranges. This research highlights the potential of machine learning and counterfactual explanations in guiding preventive interventions that may help slow down the progression from prediabetes to T2D on an individual basis, eventually fostering a recovery from prediabetes to a normoglycemic state.
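A toy illustration of the counterfactual idea: train a classifier on synthetic stand-ins for the EMR biomarkers, then move a flagged instance toward the opposite class until the prediction flips and report the feature changes. This naive interpolation is not the study's actual procedure; dedicated counterfactual methods additionally enforce sparsity and plausibility of the suggested changes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for EMR biomarkers (fasting glucose, HbA1c, BMI, cholesterol, ...)
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

preds = clf.predict(X)
x0 = X[preds == 1][0]                     # a patient the model flags as "at risk"
target_mean = X[preds == 0].mean(axis=0)  # centroid of the "normoglycemic" predictions

cf = x0.copy()
for step in np.linspace(0.05, 1.0, 20):   # move gradually toward the opposite class
    cf = x0 + step * (target_mean - x0)
    if clf.predict(cf.reshape(1, -1))[0] == 0:
        break                             # first point where the prediction flips

print("feature changes suggested by the counterfactual:", np.round(cf - x0, 3))
```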


Subjects
Type 2 Diabetes Mellitus, Electronic Health Records, Machine Learning, Prediabetic State, Humans, Canada, Biomarkers/blood
9.
Stud Health Technol Inform ; 316: 846-850, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176925

ABSTRACT

Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making processes of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the necessity for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.


Subjects
Artificial Intelligence, Natural Language Processing, Humans, Machine Learning, Electronic Health Records/classification, Deep Learning
10.
Comput Biol Med ; 170: 107981, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262204

ABSTRACT

A framework is developed for gene expression analysis by introducing fuzzy Jaccard similarity (FJS) and combining it with the Lukasiewicz implication through weights in a hybrid ensemble framework for gene selection in cancer; the method is called the weighted combination of Lukasiewicz implication and fuzzy Jaccard similarity in a hybrid ensemble framework (WCLFJHEF). While the fuzziness in Jaccard similarity is incorporated by using the existing Gödel fuzzy logic, the weights are obtained by maximizing the average F-score of selected genes in classifying the cancer patients. The patients are first divided into different clusters, based on the number of patient groups, using average-linkage agglomerative clustering and a new score, called WCLFJ (weighted combination of Lukasiewicz implication and fuzzy Jaccard similarity). The genes are then selected from each cluster separately using filter-based Relief-F and wrapper-based SVMRFE (Support Vector Machine with Recursive Feature Elimination). A gene (feature) pool is created by considering the union of selected features for all the clusters. A set of informative genes is selected from the pool using the sequential backward floating search (SBFS) algorithm. Patients are then classified using Naïve Bayes (NB) and Support Vector Machine (SVM) classifiers separately, using the selected genes, and the related F-scores are calculated. The weights in WCLFJ are then updated iteratively to maximize the average F-score obtained from the classifier results. The effectiveness of WCLFJHEF is demonstrated on six gene expression datasets. The average values of accuracy, F-score, recall, precision and MCC over all the datasets are 95%, 94%, 94%, 94%, and 90%, respectively. The explainability of the selected genes is shown using SHapley Additive exPlanations (SHAP) values, and this information is further used to rank them. The relevance of the selected gene set is biologically validated using the KEGG Pathway, Gene Ontology (GO), and the existing literature. The genes selected by WCLFJHEF are candidates for genomic alterations in various cancer types. The source code of WCLFJHEF is available at http://www.isical.ac.in/~shubhra/WCLFJHEF.html.
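A small numpy sketch of the two similarity ingredients: the Gödel-style fuzzy Jaccard similarity (sum of minima over sum of maxima) and the Lukasiewicz implication min(1, 1 - a + b). The weighted combination shown is an assumed form of the WCLFJ score and the expression values are toy data; in the paper the weight is tuned by maximizing the average F-score.

```python
import numpy as np

def fuzzy_jaccard(a, b):
    """Gödel-style fuzzy Jaccard: sum of element-wise minima over sum of maxima."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def lukasiewicz_implication(a, b):
    """Lukasiewicz implication I(a, b) = min(1, 1 - a + b), averaged over the vectors."""
    return np.minimum(1.0, 1.0 - a + b).mean()

def wclfj(a, b, w=0.5):
    """Assumed form of the weighted combination score (w is tuned in the paper)."""
    return w * lukasiewicz_implication(a, b) + (1 - w) * fuzzy_jaccard(a, b)

# two genes' normalized expression profiles across the same patients (toy values)
g1 = np.array([0.9, 0.2, 0.7, 0.4])
g2 = np.array([0.8, 0.3, 0.6, 0.5])
print(round(wclfj(g1, g2, w=0.6), 3))
```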


Subjects
Gene Expression Profiling, Neoplasms, Humans, Bayes Theorem, Gene Expression Profiling/methods, Algorithms, Neoplasms/metabolism, Software
11.
Diagnostics (Basel) ; 14(3)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38337861

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. The early and accurate diagnosis of AD is crucial for effective intervention and disease management. In recent years, deep learning techniques have shown promising results in medical image analysis, including AD diagnosis from neuroimaging data. However, the lack of interpretability in deep learning models hinders their adoption in clinical settings, where explainability is essential for gaining trust and acceptance from healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer's disease, leveraging the power of deep transfer learning and ensemble modeling. The proposed framework aims to enhance the interpretability of deep learning models by incorporating XAI techniques, allowing clinicians to understand the decision-making process and providing valuable insights into disease diagnosis. By leveraging popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performances on a comprehensive dataset. The proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), demonstrated superior accuracy, precision, recall, and F1 scores compared to individual models, reaching up to 95%. In order to enhance interpretability and transparency in Alzheimer's diagnosis, we introduced a novel model achieving an impressive accuracy of 96%. This model incorporates explainable AI techniques, including saliency maps and grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only contributes to the model's exceptional accuracy but also provides clinicians and researchers with visual insights into the neural regions influencing the diagnosis. Our findings showcase the potential of combining deep transfer learning with explainable AI in the realm of Alzheimer's disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.

12.
Comput Biol Med ; 176: 108525, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38749322

ABSTRACT

Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the lack of transparency due to the black box nature of these models is a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating the glocal (aggregated local attributions over multiple samples) and global (concept based XAI) perspectives. We have established a set of sanity checks to identify saliency as the most sensible attribution method. We provide a dataset-wide analysis across entire patient subgroups, which goes beyond anecdotal evidence, to establish the first quantitative evidence for the alignment of model behavior with cardiologists' decision rules. Furthermore, we demonstrate how these XAI techniques can be utilized for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe that these proposed methods can serve as building blocks for a complementary assessment of the internal validity during a certification process, as well as for knowledge discovery in the field of ECG analysis.


Subjects
Deep Learning, Electrocardiography, Electrocardiography/methods, Humans, Knowledge Discovery/methods, Neural Networks (Computer), Computer-Assisted Signal Processing
13.
Heliyon ; 10(7): e28547, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38623197

ABSTRACT

This research project explored the intricacies of road traffic accident severity in the UK, employing a potent combination of machine learning algorithms, econometric techniques, and traditional statistical methods to analyse longitudinal historical data. Our analysis framework includes descriptive, inferential, bivariate and multivariate methodologies; correlation analysis (Pearson's and Spearman's rank correlation coefficients); multiple logistic regression models; multicollinearity assessment; and model validation. In addressing heteroscedasticity and autocorrelation in error terms, we advanced the precision and reliability of our regression analyses using the Generalized Method of Moments (GMM). Additionally, our application of the Vector Autoregressive (VAR) model and Autoregressive Integrated Moving Average (ARIMA) models enabled accurate time series forecasting. With this approach, we achieved superior predictive accuracy, marked by a Mean Absolute Scaled Error (MASE) of 0.800 and a Mean Error (ME) of -73.80 compared to a naive forecast. The project further extends its machine learning application by creating a random forest classifier model with a precision of 73%, a recall of 78%, and an F1-score of 73%. Building on this, we employed the H2O AutoML process to optimize our model selection, resulting in an XGBoost model that exhibits exceptional predictive power, as evidenced by an RMSE of 0.1761205782994506 and an MAE of 0.0874235576229789. Factor analysis was leveraged to identify underlying variables or factors that explain the pattern of correlations within a set of observed variables. Scoring history, a tool to observe the model's performance throughout the training process, was incorporated to ensure the highest possible performance of our machine learning models. We also incorporated Explainable AI (XAI) techniques, utilizing the SHAP (SHapley Additive exPlanations) model to understand the factors contributing to accident severity. Features such as Driver_Home_Area_Type, Longitude, Driver_IMD_Decile, Road_Type, Casualty_Home_Area_Type, and Casualty_IMD_Decile were identified as significant influencers. Our research contributes to a nuanced understanding of traffic accident severity and demonstrates the potential of advanced statistical, econometric, and machine learning techniques in informing evidence-based interventions and policies for enhancing road safety.
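The SHAP-based attribution step can be sketched as follows with an XGBoost classifier trained on synthetic data standing in for the accident-severity table (the real features include Road_Type, Driver_IMD_Decile, and so on); features are then ranked by mean absolute SHAP value.

```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# synthetic stand-in for the accident-severity dataset
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # (n_samples, n_features) for binary XGBoost
ranking = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print("feature indices ranked by mean |SHAP|:", ranking)
```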

14.
J Imaging ; 10(2)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38392094

ABSTRACT

In recent discussions in the European Parliament, the need for regulations for so-called high-risk artificial intelligence (AI) systems was identified, which are currently codified in the upcoming EU Artificial Intelligence Act (AIA) and approved by the European Parliament. The AIA is the first such document to be turned into European law. This initiative focuses on turning AI systems into decision support systems (human-in-the-loop and human-in-command), where the human operator remains in control of the system. While this supposedly solves accountability issues, it includes, on one hand, the necessary human-computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach for decision interpretation and verification. This paper discusses the necessary requirements for high-risk AI systems once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support system and increasing the explainability of the system. This is illustrated using the example of the media forensic task of DeepFake detection.

15.
Comput Methods Programs Biomed ; 246: 108011, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38325024

ABSTRACT

BACKGROUND AND OBJECTIVE: Vaccination against SARS-CoV-2 in immunocompromised patients with hematologic malignancies (HM) is crucial to reduce the severity of COVID-19. Despite vaccination efforts, over a third of HM patients remain unresponsive, increasing their risk of severe breakthrough infections. This study aims to leverage machine learning's adaptability to COVID-19 dynamics, efficiently selecting patient-specific features to enhance predictions and improve healthcare strategies. Highlighting the complex COVID-hematology connection, the focus is on interpretable machine learning to provide valuable insights to clinicians and biologists. METHODS: The study evaluated a dataset with 1166 patients with hematological diseases. The output was the achievement or non-achievement of a serological response after full COVID-19 vaccination. Various machine learning methods were applied, with the best model selected based on metrics such as the Area Under the Curve (AUC), Sensitivity, Specificity, and Matthew Correlation Coefficient (MCC). Individual SHAP values were obtained for the best model, and Principal Component Analysis (PCA) was applied to these values. The patient profiles were then analyzed within identified clusters. RESULTS: Support vector machine (SVM) emerged as the best-performing model. PCA applied to SVM-derived SHAP values resulted in four perfectly separated clusters. These clusters are characterized by the proportion of patients that generate antibodies (PPGA). Cluster 1, with the second-highest PPGA (69.91%), included patients with aggressive diseases and factors contributing to increased immunodeficiency. Cluster 2 had the lowest PPGA (33.3%), but the small sample size limited conclusive findings. Cluster 3, representing the majority of the population, exhibited a high rate of antibody generation (84.39%) and a better prognosis compared to cluster 1. Cluster 4, with a PPGA of 66.33%, included patients with B-cell non-Hodgkin's lymphoma on corticosteroid therapy. CONCLUSIONS: The methodology successfully identified four separate patient clusters using Machine Learning and Explainable AI (XAI). We then analyzed each cluster based on the percentage of HM patients who generated antibodies after COVID-19 vaccination. The study suggests the methodology's potential applicability to other diseases, highlighting the importance of interpretable ML in healthcare research and decision-making.
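The SHAP-to-clusters pipeline can be sketched as below. For self-containment, the per-patient SHAP values are replaced here by a simple "replace one feature by its mean" attribution for an SVM; the attribution matrix is then projected with PCA and grouped with k-means, and each cluster is summarized by its response rate (analogous to the PPGA). The data and cluster count follow the paper's setup only loosely.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# synthetic stand-in for the hematology cohort (1 = serological response)
X, y = make_classification(n_samples=300, n_features=10, n_informative=6, random_state=0)
svm = SVC(probability=True, random_state=0).fit(X, y)

# stand-in for per-patient SHAP values: drop in the feature mean and record the probability shift
base = svm.predict_proba(X)[:, 1]
attrib = np.zeros_like(X)
for j in range(X.shape[1]):
    X_masked = X.copy()
    X_masked[:, j] = X[:, j].mean()
    attrib[:, j] = base - svm.predict_proba(X_masked)[:, 1]

components = PCA(n_components=2).fit_transform(attrib)            # project attributions to 2-D
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)
for c in range(4):
    print(f"cluster {c}: response rate {y[clusters == c].mean():.2f}")
```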


Subjects
COVID-19, Hematologic Diseases, Humans, COVID-19 Vaccines, Area Under Curve, Machine Learning
16.
J Imaging ; 9(5)2023 May 10.
Article in English | MEDLINE | ID: mdl-37233315

ABSTRACT

This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE). The proposed framework, namely, xAI-CAAE, couples CAAE with explainable Artificial Intelligence (xAI) methods, such as Saliency maps or Shapley additive explanations, to provide corrective feedback from the discriminator to the generator. xAI-guided training aims to supplement this feedback with explanations that provide a "reason" for the discriminator's decision. Moreover, Local Interpretable Model-agnostic Explanations (LIME) are leveraged to provide explanations for the face areas that most influence the decision of a pre-trained age classifier. To the best of our knowledge, xAI methods are utilized in the context of face aging for the first time. A thorough qualitative and quantitative evaluation demonstrates that the incorporation of the xAI systems contributed significantly to the generation of more realistic age-progressed and regressed images.

17.
F1000Res ; 12: 1060, 2023.
Article in English | MEDLINE | ID: mdl-37928174

ABSTRACT

Background: The management of medical waste is a complex task that necessitates effective strategies to mitigate health risks, comply with regulations, and minimize environmental impact. In this study, a novel approach based on collaboration and technological advancements is proposed. Methods: By utilizing colored bags with identification tags, smart containers with sensors, object recognition sensors, air and soil control sensors, vehicles with Global Positioning System (GPS) and temperature humidity sensors, and outsourced waste treatment, the system optimizes waste sorting, storage, and treatment operations. Additionally, the incorporation of explainable artificial intelligence (XAI) technology, leveraging scikit-learn, xgboost, catboost, lightgbm, and skorch, provides real-time insights and data analytics, facilitating informed decision-making and process optimization. Results: The integration of these cutting-edge technologies forms the foundation of an efficient and intelligent medical waste management system. Furthermore, the article highlights the use of genetic algorithms (GA) to solve vehicle routing models, optimizing waste collection routes and minimizing transportation time to treatment centers. Conclusions: Overall, the combination of advanced technologies, optimization algorithms, and XAI contributes to improved waste management practices, ultimately benefiting both public health and the environment.
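A compact genetic algorithm for a single-vehicle routing instance, illustrating the GA component mentioned in the Results: order crossover and swap mutation over visit permutations, with route length as the fitness. The depot and collection-point coordinates are hypothetical, and a real deployment would add capacities and time windows.

```python
import math
import random

random.seed(0)
# hypothetical depot (index 0) and medical-waste collection points (x, y)
points = [(0, 0), (2, 5), (6, 3), (5, 8), (8, 1), (1, 7), (7, 6), (3, 2)]

def route_length(order):
    tour = [0] + list(order) + [0]                 # start and end at the depot
    return sum(math.dist(points[a], points[b]) for a, b in zip(tour, tour[1:]))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                           # keep a slice from the first parent
    fill = [g for g in p2 if g not in child]       # fill the rest in the second parent's order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:                     # occasional swap of two visits
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(1, len(points)), len(points) - 1) for _ in range(40)]
for gen in range(200):
    pop.sort(key=route_length)
    elite = pop[:10]                               # keep the shortest routes
    children = []
    while len(children) < 30:
        p1, p2 = random.sample(elite, 2)
        children.append(mutate(order_crossover(p1, p2)))
    pop = elite + children

best = min(pop, key=route_length)
print("best route:", [0] + best + [0], "length:", round(route_length(best), 2))
```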


Subjects
Medical Waste, Waste Management, Artificial Intelligence, Transportation, Algorithms
18.
Diagnostics (Basel) ; 13(18)2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37761306

ABSTRACT

Colon cancer was the third most common cancer type worldwide in 2020, with almost two million cases diagnosed. As a result, providing new, highly accurate techniques for detecting colon cancer leads to early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacked deep learning integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated on the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance on the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared to the existing models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a metalearner on those outputs to produce better predictive results than any single model. The black-box deep learning models are interpreted using explainable AI (XAI).
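A minimal scikit-learn sketch of stacking with an SVM metalearner. In the paper the base learners are pretrained CNNs whose outputs feed the metalearner; here tabular stand-ins are used so the example stays self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# tabular stand-in; the paper's base learners are CNNs such as VGG16 and InceptionV3
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=SVC(),            # SVM metalearner, as in the Stacking-SVM variant
    cv=5,                             # out-of-fold base predictions train the metalearner
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```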

19.
Heliyon ; 9(6): e16331, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37251488

ABSTRACT

A key unmet need in the management of hemophilia A (HA) is the lack of clinically validated markers that are associated with the development of neutralizing antibodies to Factor VIII (FVIII), commonly referred to as inhibitors. This study aimed to identify relevant biomarkers for FVIII inhibition using Machine Learning (ML) and Explainable AI (XAI) with data from the My Life Our Future (MLOF) research repository. The dataset includes biologically relevant variables such as age, race, sex, ethnicity, and the variants in the F8 gene. In addition, we previously carried out Human Leukocyte Antigen Class II (HLA-II) typing on samples obtained from the MLOF repository. Using this information, we derived other patient-specific, biologically and genetically important variables. These included the number of foreign FVIII-derived peptides, based on the alignment of the endogenous FVIII and infused drug sequences, and the foreign-peptide HLA-II molecule binding affinity calculated using NetMHCIIpan. The data were processed and trained with multiple ML classification models to identify the top-performing models. The top-performing model was then chosen to apply XAI via SHAP (SHapley Additive exPlanations) to identify the variables critical for the prediction of FVIII inhibitor development in a hemophilia A patient. Using XAI, we provide a robust and ranked identification of variables that could be predictive for developing inhibitors to FVIII drugs in hemophilia A patients. These variables could be validated as biomarkers and used in making clinical decisions and during drug development. The top five variables for predicting inhibitor development based on SHAP values are: (i) the baseline activity of the FVIII protein, (ii) mean affinity of all foreign peptides for HLA DRB 3, 4, & 5 alleles, (iii) mean affinity of all foreign peptides for HLA DRB1 alleles, (iv) the minimum affinity among all foreign peptides for HLA DRB1 alleles, and (v) F8 mutation type.

20.
Front Artif Intell ; 6: 1128212, 2023.
Article in English | MEDLINE | ID: mdl-37168320

ABSTRACT

Recent years witnessed a number of proposals for the use of so-called interpretable models in specific application domains. These include high-risk, but also safety-critical, domains. In contrast, other works reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability to the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, this study reveals additional limitations of interpretable models. Concretely, this study considers application domains where the purpose is to help human decision makers understand why some prediction was made or why some other prediction was not made, and where irreducible (and so minimal) information is sought. In such domains, this study argues that answers to such why (or why not) questions can exhibit arbitrary redundancy, i.e., the answers can be simplified, as long as these answers are obtained by human inspection of the interpretable ML model representation.
