Shapley variable importance cloud for interpretable machine learning.
Patterns (N Y); 3(4): 100452, 2022 Apr 08.
Article in En | MEDLINE | ID: mdl-35465224
Interpretable machine learning has focused on explaining final models that optimize performance. The state-of-the-art Shapley additive explanations (SHAP) method locally explains the variable impact on individual predictions and has recently been extended to provide global assessments across the dataset. Our work further extends "global" assessments to a set of models that are "good enough" and are practically as relevant to a prediction task as the final model. The resulting Shapley variable importance cloud consists of Shapley-based importance measures from each good model and pools information across models to provide an overall importance measure, with uncertainty explicitly quantified to support formal statistical inference. We developed visualizations to highlight the uncertainty and to illustrate its implications for practical inference. Building on a common theoretical basis, our method seamlessly complements the widely adopted SHAP assessments of a single final model to avoid biased inference, which we demonstrate in two experiments using recidivism prediction data and clinical data.
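The core idea — compute a Shapley-based importance measure for each model in a set of "good enough" models, then pool across the set to get an overall importance with quantified uncertainty — can be sketched as follows. This is not the authors' ShapleyVIC implementation; it is a minimal illustration that assumes a linear-regression setting, uses an exact Shapley decomposition of in-sample R² as the per-model importance, and stands in for the near-optimal model set with bootstrap refits. All names and modeling choices here are illustrative assumptions.

```python
# Hedged sketch: pool Shapley-based variable importance across a set of
# "good enough" models (approximated here by bootstrap refits), reporting
# a pooled mean importance and its spread across the model set.
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
# Feature 0 is strongest, feature 1 weaker, feature 2 pure noise.
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

def r2(X_sub, y):
    """Value function v(S): in-sample R^2 of a least-squares fit on S."""
    if X_sub.shape[1] == 0:
        return 0.0
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    resid = y - X_sub @ beta
    return 1.0 - resid.var() / y.var()

def shapley_importance(X, y):
    """Exact Shapley decomposition of R^2 over the p features."""
    p = X.shape[1]
    phi = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            for S in combinations(others, size):
                # Shapley weight |S|! (p - |S| - 1)! / p!
                w = factorial(size) * factorial(p - size - 1) / factorial(p)
                phi[j] += w * (r2(X[:, list(S) + [j]], y) - r2(X[:, list(S)], y))
    return phi

# The "cloud": one importance vector per good model (bootstrap refit here).
cloud = []
for _ in range(20):
    idx = rng.integers(0, n, size=n)
    cloud.append(shapley_importance(X[idx], y[idx]))
cloud = np.array(cloud)

pooled_mean = cloud.mean(axis=0)  # overall importance across good models
pooled_sd = cloud.std(axis=0)     # uncertainty, usable for formal inference
print(pooled_mean, pooled_sd)
```

In the real method, the good-model set is defined by near-optimal performance rather than bootstrap resampling, but the pooling step — summarizing per-model importance vectors into a mean with an uncertainty band — has the same shape.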
Full text:
1
Database:
MEDLINE
Language:
En
Publication year:
2022
Document type:
Article