Characterizing the Contribution of Dependent Features in XAI Methods.
IEEE J Biomed Health Inform
; PP. 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696291
ABSTRACT
Explainable Artificial Intelligence (XAI) provides tools to help understand how AI models work and reach a particular decision or outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed to make black-box and complex models more digestible from a human perspective. However, one of the main issues XAI methods face, especially when dealing with a high number of features, is the presence of multicollinearity, which casts doubt on the robustness of XAI outcomes such as the ranking of informative features. Most current XAI methods either do not consider collinearity or assume the features are independent, which is not necessarily true in general. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method, accounting for the dependency among the features and revealing its impact on the outcome. The proposed method was applied to SHAP, as an example of an XAI method that assumes the features are independent. For this purpose, several models were exploited for a well-known classification task (males versus females) using nine cardiac phenotypes extracted from cardiac magnetic resonance imaging as features. Principal component analysis and biological plausibility were employed to validate the proposed method. Our results showed that, in the presence of collinearity, the proposed proxy could lead to a more robust list of informative features than the original SHAP.
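The abstract does not give the proxy's formula, so the Python sketch below only illustrates the general idea under stated assumptions: standard SHAP importances are computed for a classifier and then redistributed across correlated features using the absolute feature-correlation matrix. The synthetic data, the RandomForestClassifier model, and the correlation-based reweighting are all hypothetical stand-ins, not the authors' method.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins for the paper's data: X would hold the nine
# cardiac phenotypes, y the male/female labels.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 9)),
                 columns=[f"phenotype_{i}" for i in range(9)])
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Standard SHAP importances, which implicitly treat features as independent.
explainer = shap.TreeExplainer(model)
vals = explainer.shap_values(X)
if isinstance(vals, list):      # older SHAP: list of per-class arrays
    vals = vals[1]
elif vals.ndim == 3:            # newer SHAP: (samples, features, classes)
    vals = vals[:, :, 1]
base_importance = np.abs(vals).mean(axis=0)

# Illustrative dependency-aware adjustment (an assumption, not the paper's
# formula): spread each feature's importance over correlated features in
# proportion to the rows of the absolute correlation matrix.
corr = np.abs(X.corr().to_numpy())
weights = corr / corr.sum(axis=1, keepdims=True)
adjusted_importance = weights.T @ base_importance

# Re-rank features by the adjusted importance.
ranking = np.argsort(adjusted_importance)[::-1]
print([X.columns[i] for i in ranking])
```

Because the correlation matrix has ones on its diagonal, each feature retains a share of its own importance while strongly collinear features pool theirs, which is one simple way a ranking could become more stable under multicollinearity.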
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Language: En
Publication year: 2024
Document type: Article