Results 1 - 20 of 32
1.
Front Robot AI ; 11: 1375490, 2024.
Article in English | MEDLINE | ID: mdl-39104806

ABSTRACT

Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy that dictates the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff wherein participants perceived language-based policy explanations to be significantly more usable; however, participants were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree. Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for XAI developers in future work on policy explanations.
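The paper's agents and explanation generators are not reproduced here. As a rough illustration of the "Tree" explanation modality, the sketch below distills a toy policy into a shallow scikit-learn decision tree and prints its rules; the gridworld-style states, action labels, and logged rollouts are invented for illustration and are not the study's setup.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical logged (state, action) pairs from a trained agent's rollouts.
# Features: [distance_to_goal, distance_to_obstacle]; actions: 0 = forward, 1 = turn.
rng = np.random.default_rng(0)
states = rng.uniform(0, 10, size=(500, 2))
actions = (states[:, 1] < 2).astype(int)  # toy policy: turn when an obstacle is near

# Distill the policy into a shallow decision tree and print it as an explanation.
tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(tree, feature_names=["dist_to_goal", "dist_to_obstacle"]))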

2.
Stud Health Technol Inform ; 316: 736-740, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176900

ABSTRACT

This study leverages data from a Canadian database of primary care Electronic Medical Records to develop machine learning models predicting type 2 diabetes mellitus (T2D), prediabetes, or normoglycemia. These models are used as a basis for extracting counterfactual explanations and for deriving personalized changes in biomarkers to prevent T2D onset, particularly in the still-reversible prediabetic state. The models achieve satisfactory performance. Furthermore, feature importance analysis underscores the significance of fasting blood sugar and glycated hemoglobin, while counterfactual explanations emphasize the centrality of keeping body mass index and cholesterol indicators within or close to the clinically desirable ranges. This research highlights the potential of machine learning and counterfactual explanations in guiding preventive interventions that may help slow down the progression from prediabetes to T2D on an individual basis, eventually fostering a recovery from prediabetes to a normoglycemic state.
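The abstract does not name the counterfactual method used. As a minimal sketch of the idea, assuming a greedy search toward the training mean rather than the authors' actual procedure, the code below trains a classifier on invented biomarker-like features and nudges one record until its predicted class flips.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical features: [fasting_blood_sugar, hba1c, bmi, cholesterol] (invented scales).
X = rng.normal([6.0, 6.0, 28.0, 5.0], [1.0, 0.8, 4.0, 1.0], size=(1000, 4))
y = (0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * X[:, 2] > 10.0).astype(int)  # toy T2D label
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def counterfactual(x, step=0.05, max_iter=200):
    """Greedily nudge features toward the training mean until the class flips to 0."""
    target = X.mean(axis=0)
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == 0:  # 0 = normoglycemia in this toy setup
            return x_cf
        x_cf += step * (target - x_cf)
    return x_cf

x0 = X[y == 1][0]
print("original:", x0.round(2), "counterfactual:", counterfactual(x0).round(2))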


Subject(s)
Diabetes Mellitus, Type 2; Electronic Health Records; Machine Learning; Prediabetic State; Humans; Canada; Biomarkers/blood
3.
Stud Health Technol Inform ; 316: 846-850, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176925

ABSTRACT

Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making process of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the necessity for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.
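The review does not prescribe a particular toolchain. A minimal example of the model-agnostic family it surveys, using LIME with a scikit-learn text pipeline on a few invented sentences standing in for clinical notes, might look like this.

from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for medical free text.
texts = ["patient reports chest pain and dyspnea",
         "routine follow-up, no complaints",
         "severe chest pain radiating to left arm",
         "annual physical, patient feels well"]
labels = [1, 0, 1, 0]  # 1 = cardiac concern, 0 = routine

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["routine", "cardiac"])
exp = explainer.explain_instance("new onset chest pain at rest",
                                 pipe.predict_proba, num_features=4)
print(exp.as_list())  # word-level contributions to the predicted class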


Subject(s)
Artificial Intelligence; Natural Language Processing; Humans; Machine Learning; Electronic Health Records/classification; Deep Learning
4.
Comput Biol Med ; 176: 108525, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38749322

ABSTRACT

Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the lack of transparency due to the black-box nature of these models is a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating the glocal (aggregated local attributions over multiple samples) and global (concept-based XAI) perspectives. We have established a set of sanity checks to identify saliency as the most sensible attribution method. We provide a dataset-wide analysis across entire patient subgroups, which goes beyond anecdotal evidence, to establish the first quantitative evidence for the alignment of model behavior with cardiologists' decision rules. Furthermore, we demonstrate how these XAI techniques can be utilized for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe that these proposed methods can serve as building blocks for a complementary assessment of internal validity during a certification process, as well as for knowledge discovery in the field of ECG analysis.
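The paper's ECG models and data are not reproduced here. As a sketch of gradient-based saliency, the attribution method the sanity checks favor, the snippet below computes input gradients for a toy 1D CNN on a random 12-lead signal; the architecture and signal are assumptions for illustration only.

import torch
import torch.nn as nn

# Toy 1D CNN standing in for an ECG classifier (12 leads, 1000 samples, 5 classes).
model = nn.Sequential(
    nn.Conv1d(12, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 5),
)
model.eval()

ecg = torch.randn(1, 12, 1000, requires_grad=True)  # one synthetic record
logits = model(ecg)
logits[0, logits.argmax()].backward()   # gradient of the top-class score w.r.t. the input
saliency = ecg.grad.abs().squeeze(0)    # |d score / d input|, shape (12, 1000)
print(saliency.mean(dim=1))             # per-lead average attribution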


Subject(s)
Deep Learning; Electrocardiography; Electrocardiography/methods; Humans; Knowledge Discovery/methods; Neural Networks, Computer; Signal Processing, Computer-Assisted
5.
Heliyon ; 10(7): e28547, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38623197

ABSTRACT

This research project explored the intricacies of road traffic accident severity in the UK, employing a potent combination of machine learning algorithms, econometric techniques, and traditional statistical methods to analyse longitudinal historical data. Our analysis framework includes descriptive, inferential, bivariate, and multivariate methodologies; correlation analysis (Pearson's and Spearman's rank correlation coefficients); multiple logistic regression models; multicollinearity assessment; and model validation. In addressing heteroscedasticity or autocorrelation in error terms, we advanced the precision and reliability of our regression analyses using the Generalized Method of Moments (GMM). Additionally, our application of the Vector Autoregressive (VAR) and Autoregressive Integrated Moving Average (ARIMA) models enabled accurate time series forecasting. With this approach, we achieved superior predictive accuracy, marked by a Mean Absolute Scaled Error (MASE) of 0.800 and a Mean Error (ME) of -73.80 compared to a naive forecast. The project further extends its machine learning application by creating a random forest classifier model with a precision of 73%, a recall of 78%, and an F1-score of 73%. Building on this, we employed the H2O AutoML process to optimize our model selection, resulting in an XGBoost model that exhibits exceptional predictive power, as evidenced by an RMSE of 0.1761205782994506 and an MAE of 0.0874235576229789. Factor analysis was leveraged to identify underlying variables, or factors, that explain the pattern of correlations within a set of observed variables. Scoring history, a tool to observe the model's performance throughout the training process, was incorporated to ensure the highest possible performance of our machine learning models. We also incorporated Explainable AI (XAI) techniques, utilizing the SHAP (SHapley Additive exPlanations) model to comprehend the contributing factors to accident severity. Features such as Driver_Home_Area_Type, Longitude, Driver_IMD_Decile, Road_Type, Casualty_Home_Area_Type, and Casualty_IMD_Decile were identified as significant influencers. Our research contributes to a nuanced understanding of traffic accident severity and demonstrates the potential of advanced statistical, econometric, and machine learning techniques in informing evidence-based interventions and policies for enhancing road safety.
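The study's data and trained models are not available here. A minimal sketch of the SHAP step it describes, applied to a tree-based classifier on synthetic stand-in data (the column names are taken from the abstract, the values and labels are invented), could look as follows.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cols = ["Driver_Home_Area_Type", "Longitude", "Driver_IMD_Decile",
        "Road_Type", "Casualty_Home_Area_Type", "Casualty_IMD_Decile"]
X = pd.DataFrame(rng.integers(0, 10, size=(500, len(cols))), columns=cols)
y = rng.integers(0, 2, size=500)  # placeholder severity label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Depending on the shap version this is a list (one array per class) or a 3-D array.
pos = sv[1] if isinstance(sv, list) else sv[:, :, 1]
ranking = np.abs(pos).mean(axis=0)  # mean |SHAP| per feature for the positive class
print(sorted(zip(cols, ranking), key=lambda t: -t[1]))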

6.
J Imaging ; 10(2)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38392094

ABSTRACT

In recent discussions in the European Parliament, the need for regulations for so-called high-risk artificial intelligence (AI) systems was identified; these are currently codified in the upcoming EU Artificial Intelligence Act (AIA), approved by the European Parliament. The AIA is the first such document to be turned into European law. This initiative focuses on turning AI systems into decision support systems (human-in-the-loop and human-in-command), where the human operator remains in control of the system. While this supposedly solves accountability issues, it introduces, on the one hand, the necessary human-computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach for decision interpretation and verification. This paper discusses the necessary requirements for high-risk AI systems once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support setup and from increasing the explainability of the system. This is illustrated using the example of the media forensic task of DeepFake detection.

7.
Comput Methods Programs Biomed ; 246: 108011, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38325024

ABSTRACT

BACKGROUND AND OBJECTIVE: Vaccination against SARS-CoV-2 in immunocompromised patients with hematologic malignancies (HM) is crucial to reduce the severity of COVID-19. Despite vaccination efforts, over a third of HM patients remain unresponsive, increasing their risk of severe breakthrough infections. This study aims to leverage machine learning's adaptability to COVID-19 dynamics, efficiently selecting patient-specific features to enhance predictions and improve healthcare strategies. Given the complex COVID-hematology connection, the focus is on interpretable machine learning that provides valuable insights to clinicians and biologists. METHODS: The study evaluated a dataset of 1166 patients with hematological diseases. The output was the achievement or non-achievement of a serological response after full COVID-19 vaccination. Various machine learning methods were applied, with the best model selected based on metrics such as the Area Under the Curve (AUC), Sensitivity, Specificity, and Matthews Correlation Coefficient (MCC). Individual SHAP values were obtained for the best model, and Principal Component Analysis (PCA) was applied to these values. The patient profiles were then analyzed within the identified clusters. RESULTS: The support vector machine (SVM) emerged as the best-performing model. PCA applied to SVM-derived SHAP values resulted in four perfectly separated clusters. These clusters are characterized by the proportion of patients that generate antibodies (PPGA). Cluster 1, with the second-highest PPGA (69.91%), included patients with aggressive diseases and factors contributing to increased immunodeficiency. Cluster 2 had the lowest PPGA (33.3%), but the small sample size limited conclusive findings. Cluster 3, representing the majority of the population, exhibited a high rate of antibody generation (84.39%) and a better prognosis compared to cluster 1. Cluster 4, with a PPGA of 66.33%, included patients with B-cell non-Hodgkin's lymphoma on corticosteroid therapy. CONCLUSIONS: The methodology successfully identified four separate patient clusters using Machine Learning and Explainable AI (XAI). We then analyzed each cluster based on the percentage of HM patients who generated antibodies after COVID-19 vaccination. The study suggests the methodology's potential applicability to other diseases, highlighting the importance of interpretable ML in healthcare research and decision-making.
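A compressed sketch of the pipeline described (SVM, per-patient SHAP values, PCA, then clustering), run on synthetic stand-in data rather than the 1166-patient cohort, might read as follows; KMeans stands in for whichever clustering step the authors used on the PCA projection.

import numpy as np
import shap
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                  # stand-in clinical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in serological response

svm = SVC(probability=True).fit(X, y)
background = shap.sample(X, 50)                 # background set for the kernel explainer
explainer = shap.KernelExplainer(svm.predict_proba, background)
sv = explainer.shap_values(X[:50])              # SHAP values for 50 patients

# Positive-class attributions, then PCA + k-means to look for patient clusters.
pos = sv[1] if isinstance(sv, list) else sv[:, :, 1]
embed = PCA(n_components=2).fit_transform(pos)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embed)
print(np.bincount(clusters))                    # patients per cluster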


Subject(s)
COVID-19; Hematologic Diseases; Humans; COVID-19 Vaccines; Area Under Curve; Machine Learning
8.
Diagnostics (Basel) ; 14(3)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38337861

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. The early and accurate diagnosis of AD is crucial for effective intervention and disease management. In recent years, deep learning techniques have shown promising results in medical image analysis, including AD diagnosis from neuroimaging data. However, the lack of interpretability in deep learning models hinders their adoption in clinical settings, where explainability is essential for gaining trust and acceptance from healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer's disease, leveraging the power of deep transfer learning and ensemble modeling. The proposed framework aims to enhance the interpretability of deep learning models by incorporating XAI techniques, allowing clinicians to understand the decision-making process and providing valuable insights into disease diagnosis. By leveraging popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performances on a comprehensive dataset. The proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), demonstrated superior accuracy, precision, recall, and F1 scores compared to individual models, reaching up to 95%. In order to enhance interpretability and transparency in Alzheimer's diagnosis, we introduced a novel model achieving an impressive accuracy of 96%. This model incorporates explainable AI techniques, including saliency maps and grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only contributes to the model's exceptional accuracy but also provides clinicians and researchers with visual insights into the neural regions influencing the diagnosis. Our findings showcase the potential of combining deep transfer learning with explainable AI in the realm of Alzheimer's disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.
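The trained ensembles are not reproduced here. A bare-bones Grad-CAM sketch over a Keras VGG16 backbone (one of the architectures named above), applied to a random tensor standing in for a neuroimaging slice, could look like this; weights=None keeps the sketch self-contained, whereas a real run would load trained weights.

import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)   # untrained backbone, same layer names
img = tf.random.uniform((1, 224, 224, 3))           # stand-in image

last_conv = model.get_layer("block5_conv3")
grad_model = tf.keras.models.Model(model.inputs, [last_conv.output, model.output])

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    pred_index = tf.argmax(preds[0])
    class_channel = preds[:, pred_index]

grads = tape.gradient(class_channel, conv_out)   # d(class score) / d(feature maps)
weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pool the gradients
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
print(cam.shape)                                  # (1, 14, 14) heatmap to upsample onto the image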

9.
Comput Biol Med ; 170: 107981, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262204

ABSTRACT

A framework for gene expression analysis is developed by introducing fuzzy Jaccard similarity (FJS) and combining the Lukasiewicz implication with it through weights, for gene selection in cancer. The method is called the weighted combination of Lukasiewicz implication and fuzzy Jaccard similarity in a hybrid ensemble framework (WCLFJHEF). While the fuzziness in the Jaccard similarity is incorporated by using the existing Gödel fuzzy logic, the weights are obtained by maximizing the average F-score of the selected genes in classifying the cancer patients. The patients are first divided into different clusters, based on the number of patient groups, using average-linkage agglomerative clustering and a new score, called WCLFJ (weighted combination of Lukasiewicz implication and fuzzy Jaccard similarity). The genes are then selected from each cluster separately using filter-based Relief-F and wrapper-based SVMRFE (Support Vector Machine with Recursive Feature Elimination). A gene (feature) pool is created by taking the union of the selected features for all the clusters. A set of informative genes is selected from the pool using the sequential backward floating search (SBFS) algorithm. Patients are then classified using Naïve Bayes (NB) and Support Vector Machine (SVM) classifiers separately, using the selected genes, and the related F-scores are calculated. The weights in WCLFJ are then updated iteratively to maximize the average F-score obtained from the classifier results. The effectiveness of WCLFJHEF is demonstrated on six gene expression datasets. The average values of accuracy, F-score, recall, precision and MCC over all the datasets are 95%, 94%, 94%, 94%, and 90%, respectively. The explainability of the selected genes is shown using SHapley Additive exPlanations (SHAP) values, and this information is further used to rank them. The relevance of the selected gene set is biologically validated using KEGG pathways, Gene Ontology (GO), and the existing literature. The genes selected by WCLFJHEF are candidates for genomic alterations in various cancer types. The source code of WCLFJHEF is available at http://www.isical.ac.in/~shubhra/WCLFJHEF.html.
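The exact formulation of the WCLFJ score is in the paper; our reading of it (a weighted combination of the Lukasiewicz implication and the fuzzy Jaccard similarity between two membership vectors) is sketched below with numpy, and should be treated as an assumption rather than the authors' code.

import numpy as np

def fuzzy_jaccard(a, b):
    """Fuzzy Jaccard similarity of two membership vectors in [0, 1]."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def lukasiewicz_implication(a, b):
    """Element-wise Lukasiewicz implication a -> b, averaged over the vector."""
    return np.mean(np.minimum(1.0, 1.0 - a + b))

def wclfj(a, b, w=0.5):
    """Weighted combination of the two terms; w would be tuned to maximize the F-score."""
    return w * lukasiewicz_implication(a, b) + (1.0 - w) * fuzzy_jaccard(a, b)

# Two toy normalized gene-expression profiles.
a = np.array([0.9, 0.1, 0.4, 0.7])
b = np.array([0.8, 0.2, 0.5, 0.3])
print(wclfj(a, b, w=0.6))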


Subject(s)
Gene Expression Profiling; Neoplasms; Humans; Bayes Theorem; Gene Expression Profiling/methods; Algorithms; Neoplasms/metabolism; Software
10.
Front Psychiatry ; 14: 1219479, 2023.
Article in English | MEDLINE | ID: mdl-38144474

ABSTRACT

Advances in artificial intelligence (AI) in general and Natural Language Processing (NLP) in particular are paving the way forward for the automated detection and prediction of mental health disorders among the population. Recent research in this area has prioritized predictive accuracy over model interpretability by relying on deep learning methods. However, prioritizing predictive accuracy over model interpretability can result in a lack of transparency in the decision-making process, which is critical in sensitive applications such as healthcare. There is thus a growing need for explainable AI (XAI) approaches to psychiatric diagnosis and prediction. The main aim of this work is to address this gap by conducting a systematic investigation of XAI approaches in the realm of automatic detection of mental disorders from language behavior, leveraging textual data from social media. In pursuit of this aim, we perform extensive experiments to evaluate the balance between accuracy and interpretability across predictive mental health models. More specifically, we build BiLSTM models trained on a comprehensive set of human-interpretable features, encompassing syntactic complexity, lexical sophistication, readability, cohesion, stylistics, as well as topics and sentiment/emotions derived from lexicon-based dictionaries, to capture multiple dimensions of language production. We conduct extensive feature ablation experiments to determine the most informative feature groups associated with specific mental health conditions. We juxtapose the performance of these models against a "black-box" domain-specific pretrained transformer adapted for mental health applications. To enhance the interpretability of the transformer models, we utilize a multi-task fusion learning framework infusing information from two relevant domains (emotion and personality traits). Moreover, we employ two distinct explanation techniques: the local interpretable model-agnostic explanations (LIME) method and a model-specific self-explaining method (AGRAD). These methods allow us to discern the specific categories of words that the information-infused models rely on when generating predictions. Our proposed approaches are evaluated on two public English benchmark datasets covering five mental health conditions (attention-deficit/hyperactivity disorder, anxiety, bipolar disorder, depression, and psychological stress).

11.
Front Artif Intell ; 6: 1272506, 2023.
Article in English | MEDLINE | ID: mdl-38111787

ABSTRACT

Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other related frontline sectors. Various Artificial-Intelligence-based models were developed to effectively manage medical resources and identify patients at high risk. However, many of these AI models were limited in their practical high-risk applicability due to their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black box" behavior of machine learning models and offer definitive and interpretable evidence. XAI provides interpretable analysis in a human-compliant way, thus boosting our confidence in the successful implementation of AI systems in the wild. Methods: In this regard, this study explores the use of model-agnostic XAI methods, such as SHapley Additive exPlanations (SHAP) values and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID severity prediction task. Various classifiers, such as a Decision Tree, XGBoost, and a Neural Network, are leveraged to develop the machine learning models. Results and discussion: The proposed XAI tools are found to augment the high performance of AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability in model development and can lead to better and more trustworthy ML models in the future.
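A compact illustration of the LIME side of such an analysis, using LimeTabularExplainer on a gradient-boosting classifier (standing in for XGBoost) trained on invented symptom indicators, might be:

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["fever", "cough", "breathlessness", "age", "spo2"]  # invented feature set
X = rng.normal(size=(400, len(features)))
y = (X[:, 2] - 0.5 * X[:, 4] > 0).astype(int)   # toy "severe" label

clf = GradientBoostingClassifier().fit(X, y)
explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["mild", "severe"], mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=3)
print(exp.as_list())   # locally weighted feature contributions for this patient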

12.
F1000Res ; 12: 1060, 2023.
Article in English | MEDLINE | ID: mdl-37928174

ABSTRACT

Background: The management of medical waste is a complex task that necessitates effective strategies to mitigate health risks, comply with regulations, and minimize environmental impact. In this study, a novel approach based on collaboration and technological advancements is proposed. Methods: By utilizing colored bags with identification tags, smart containers with sensors, object recognition sensors, air and soil control sensors, vehicles with Global Positioning System (GPS) and temperature-humidity sensors, and outsourced waste treatment, the system optimizes waste sorting, storage, and treatment operations. Additionally, the incorporation of explainable artificial intelligence (XAI) technology, leveraging scikit-learn, xgboost, catboost, lightgbm, and skorch, provides real-time insights and data analytics, facilitating informed decision-making and process optimization. Results: The integration of these cutting-edge technologies forms the foundation of an efficient and intelligent medical waste management system. Furthermore, the article highlights the use of genetic algorithms (GA) to solve vehicle routing models, optimizing waste collection routes and minimizing transportation time to treatment centers. Conclusions: Overall, the combination of advanced technologies, optimization algorithms, and XAI contributes to improved waste management practices, ultimately benefiting both public health and the environment.
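The abstract gives no implementation detail for the genetic algorithm, so the sketch below is only a toy: a mutation-only evolutionary loop over permutations of randomly placed collection stops (crossover, capacities, and time windows from a real vehicle routing model are omitted).

import math
import random

random.seed(0)
stops = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]  # stop 0 = depot

def route_length(order):
    """Total tour length starting and ending at the depot (stop 0)."""
    tour = [0] + list(order) + [0]
    return sum(math.dist(stops[a], stops[b]) for a, b in zip(tour, tour[1:]))

def mutate(order):
    """Swap two randomly chosen stops."""
    i, j = random.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

# Keep the best half of the population, refill with mutated copies, repeat.
population = [random.sample(range(1, len(stops)), len(stops) - 1) for _ in range(30)]
for _ in range(200):
    population.sort(key=route_length)
    population = population[:15] + [mutate(random.choice(population[:15])) for _ in range(15)]

print(round(route_length(population[0]), 2), population[0])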


Subject(s)
Medical Waste; Waste Management; Artificial Intelligence; Transportation; Algorithms
13.
Entropy (Basel) ; 25(10)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37895550

ABSTRACT

Recent advancements in artificial intelligence (AI) technology have raised concerns about ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing the security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an "entropy lens" to root the study in information theory and to enhance transparency and trust in "black box" AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human-machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the framework's ability to measure trust in the design and management of AI systems.

14.
Diagnostics (Basel) ; 13(18)2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37761306

ABSTRACT

Colon cancer was the third most common cancer type worldwide in 2020, with almost two million cases diagnosed. Providing new, highly accurate techniques for detecting colon cancer therefore enables early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking deep learning integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, Resnet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated using the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance for the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared to the existing models (VGG16, InceptionV3, Resnet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a metalearner on those outputs to produce better predictive results than any single model. The black-box deep learning models are interpreted using explainable AI (XAI).
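The CNN base learners are not reproduced here. To illustrate the stacking idea itself (base-model predictions fed to an SVM metalearner), the sketch below uses scikit-learn's StackingClassifier on synthetic features standing in for outputs of the image models; it is an analogy, not the paper's architecture.

import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))              # stand-in for features extracted from images
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # toy benign/malignant label

base_learners = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                 ("lr", LogisticRegression(max_iter=1000))]
stack = StackingClassifier(estimators=base_learners, final_estimator=SVC(), cv=5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))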

15.
J Imaging ; 9(9)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37754941

ABSTRACT

Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), the COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and the COVID-19 Radiography Database. We achieved a testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.

16.
Artif Intell Med ; 143: 102545, 2023 09.
Article in English | MEDLINE | ID: mdl-37673554

ABSTRACT

Current models of Explainable Artificial Intelligence (XAI) have shown a lack of reliability when evaluating feature relevance for deep neural biomarker classifiers. The inclusion of reliable saliency maps for obtaining trustworthy and interpretable neural activity is still insufficiently mature for practical applications. These limitations impede the development of clinical applications of Deep Learning. To address these limitations, we propose the RemOve-And-Retrain (ROAR) algorithm, which supports the recovery of highly relevant features from any pre-trained deep neural network. In this study we evaluated the ROAR methodology and algorithm for the Face Emotion Recognition (FER) task, which is clinically applicable in the study of Autism Spectrum Disorder (ASD). We trained a Convolutional Neural Network (CNN) on electroencephalography (EEG) signals and assessed the relevance of FER-elicited EEG features in individuals diagnosed with and without ASD. Specifically, we compared the ROAR reliability of well-known relevance maps such as Layer-Wise Relevance Propagation, PatternNet, Pattern-Attribution, and Smooth-Grad Squared. This study is the first to bridge previous neuroscience and ASD research findings to feature-relevance calculation for EEG-based emotion recognition with CNNs in typically developing (TD) and ASD individuals.
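A tabular caricature of the ROAR procedure (rank features by an attribution method, overwrite the top-ranked fraction with an uninformative value, retrain, and measure the accuracy drop) is given below; the paper applies the idea to EEG inputs and CNN relevance maps rather than this synthetic setting, and impurity-based importances stand in for the relevance maps.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # only the first two features matter
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(base.feature_importances_)[::-1]   # attribution stand-in

for k in (0, 2, 5):                           # remove the k most relevant features, retrain
    X_tr_k, X_te_k = X_tr.copy(), X_te.copy()
    fill = X_tr[:, ranking[:k]].mean(axis=0)  # overwrite with the training mean
    X_tr_k[:, ranking[:k]] = fill
    X_te_k[:, ranking[:k]] = fill
    retrained = RandomForestClassifier(random_state=0).fit(X_tr_k, y_tr)
    print(f"removed {k} features -> accuracy {retrained.score(X_te_k, y_te):.3f}")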


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Deep Learning; Humans; Autistic Disorder/diagnosis; Autism Spectrum Disorder/diagnosis; Artificial Intelligence; Reproducibility of Results; Algorithms; Emotions; Electroencephalography
17.
J Environ Manage ; 342: 118149, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37187074

ABSTRACT

Deep learning networks powered by AI are essential predictive tools relying on image data availability and advancements in processing hardware. However, little attention has been paid to explainable AI (XAI) in application fields, including environmental management. This study develops an explainability framework with a triadic structure focused on input, AI model, and output. The framework provides three main contributions: (1) a context-based augmentation of input data to maximize generalizability and minimize overfitting; (2) direct monitoring of AI model layers and parameters to use leaner (lighter) networks suitable for edge-device deployment; and (3) an output explanation procedure focusing on the interpretability and robustness of predictive decisions by AI networks. These contributions significantly advance the state of the art in XAI for environmental management research, offering implications for improved understanding and utilization of AI networks in this field.


Subject(s)
Conservation of Natural Resources; Deep Learning
18.
Front Artif Intell ; 6: 1128212, 2023.
Article in English | MEDLINE | ID: mdl-37168320

ABSTRACT

Recent years witnessed a number of proposals for the use of so-called interpretable models in specific application domains. These include high-risk, but also safety-critical, domains. In contrast, other works reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, this study reveals additional limitations of interpretable models. Concretely, this study considers application domains where the purpose is to help human decision makers understand why some prediction was made, or why some other prediction was not made, and where irreducible (and so minimal) information is sought. In such domains, this study argues that answers to such why (or why-not) questions can exhibit arbitrary redundancy, i.e., the answers can be simplified, as long as these answers are obtained by human inspection of the interpretable ML model representation.

19.
J Imaging ; 9(5)2023 May 10.
Article in English | MEDLINE | ID: mdl-37233315

ABSTRACT

This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE). The proposed framework, namely, xAI-CAAE, couples CAAE with explainable Artificial Intelligence (xAI) methods, such as Saliency maps or Shapley additive explanations, to provide corrective feedback from the discriminator to the generator. xAI-guided training aims to supplement this feedback with explanations that provide a "reason" for the discriminator's decision. Moreover, Local Interpretable Model-agnostic Explanations (LIME) are leveraged to provide explanations for the face areas that most influence the decision of a pre-trained age classifier. To the best of our knowledge, xAI methods are utilized in the context of face aging for the first time. A thorough qualitative and quantitative evaluation demonstrates that the incorporation of the xAI systems contributed significantly to the generation of more realistic age-progressed and regressed images.

20.
Heliyon ; 9(6): e16331, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37251488

ABSTRACT

A key unmet need in the management of hemophilia A (HA) is the lack of clinically validated markers that are associated with the development of neutralizing antibodies to Factor VIII (FVIII) (commonly referred to as inhibitors). This study aimed to identify relevant biomarkers for FVIII inhibition using Machine Learning (ML) and Explainable AI (XAI) with the My Life Our Future (MLOF) research repository. The dataset includes biologically relevant variables such as age, race, sex, ethnicity, and the variants in the F8 gene. In addition, we previously carried out Human Leukocyte Antigen Class II (HLA-II) typing on samples obtained from the MLOF repository. Using this information, we derived other patient-specific, biologically and genetically important variables. These included the number of foreign FVIII-derived peptides, identified from the alignment of the endogenous FVIII and infused drug sequences, and the foreign-peptide HLA-II molecule binding affinity calculated using NetMHCIIpan. The data were processed and trained with multiple ML classification models to identify the top-performing models. The top-performing model was then chosen to apply XAI via SHAP (SHapley Additive exPlanations) to identify the variables critical for the prediction of FVIII inhibitor development in a hemophilia A patient. Using XAI, we provide a robust and ranked identification of variables that could be predictive of developing inhibitors to FVIII drugs in hemophilia A patients. These variables could be validated as biomarkers and used in making clinical decisions and during drug development. The top five variables for predicting inhibitor development based on SHAP values are: (i) the baseline activity of the FVIII protein, (ii) the mean affinity of all foreign peptides for HLA DRB3, 4, and 5 alleles, (iii) the mean affinity of all foreign peptides for HLA DRB1 alleles, (iv) the minimum affinity among all foreign peptides for HLA DRB1 alleles, and (v) the F8 mutation type.
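The sequence alignment and NetMHCIIpan scoring pipeline is not reproduced here. As a toy sketch of how candidate "foreign" peptides can be enumerated from a mismatch between an endogenous and an infused sequence (the 22-residue sequences below are invented, not F8), one could write:

# Toy endogenous vs. infused sequences of equal length; real FVIII sequences are far longer.
endogenous = "ATRRYYLGAVELSWDYMQSDLG"
infused    = "ATRRYYLGAVELSRDYMQSDLG"   # one substituted residue for illustration

mismatches = [i for i, (a, b) in enumerate(zip(endogenous, infused)) if a != b]

# Enumerate 15-mer windows of the infused product that contain a mismatched residue;
# these are the candidate foreign peptides whose HLA-II affinities NetMHCIIpan would score.
k = 15
foreign_peptides = {infused[s:s + k]
                    for s in range(len(infused) - k + 1)
                    if any(s <= i < s + k for i in mismatches)}

print(len(foreign_peptides), "candidate foreign 15-mers")
for pep in sorted(foreign_peptides):
    print(pep)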
