Shapley variable importance cloud for interpretable machine learning.
Ning, Yilin; Ong, Marcus Eng Hock; Chakraborty, Bibhas; Goldstein, Benjamin Alan; Ting, Daniel Shu Wei; Vaughan, Roger; Liu, Nan.
Affiliation
  • Ning Y; Centre for Quantitative Medicine, Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore.
  • Ong MEH; Programme in Health Services and Systems Research, Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore.
  • Chakraborty B; Health Services Research Centre, Singapore Health Services, 20 College Road, Singapore 169856, Singapore.
  • Goldstein BA; Department of Emergency Medicine, Singapore General Hospital, 1 Hospital Crescent Outram Road, Singapore 169608, Singapore.
  • Ting DSW; Centre for Quantitative Medicine, Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore.
  • Vaughan R; Programme in Health Services and Systems Research, Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore.
  • Liu N; Department of Statistics and Data Science, National University of Singapore, 6 Science Drive 2, Singapore 117546, Singapore.
Patterns (N Y); 3(4): 100452, 2022 Apr 08.
Article in En | MEDLINE | ID: mdl-35465224
Interpretable machine learning has focused on explaining final models that optimize performance. The state-of-the-art Shapley additive explanations (SHAP) locally explains the variable impact on individual predictions and has recently been extended to provide global assessments across the dataset. Our work further extends "global" assessments to a set of models that are "good enough" and are practically as relevant as the final model to a prediction task. The resulting Shapley variable importance cloud consists of Shapley-based importance measures from each good model and pools information across models to provide an overall importance measure, with uncertainty explicitly quantified to support formal statistical inference. We developed visualizations to highlight the uncertainty and to illustrate its implications for practical inference. Building on a common theoretical basis, our method seamlessly complements the widely adopted SHAP assessments of a single final model to avoid biased inference, which we demonstrate in two experiments using recidivism prediction data and clinical data.
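The sketch below is not the authors' ShapleyVIC implementation; it only illustrates the idea described in the abstract under simplifying assumptions: candidate "good enough" models are generated by refitting a logistic regression on bootstrap resamples, models whose loss exceeds a tolerance of the best model's loss are discarded, per-model global importance is taken as the mean absolute SHAP value from the shap package's LinearExplainer, and the importance cloud is summarized by the mean and spread across retained models. Dataset, the epsilon tolerance, and the bootstrap scheme are illustrative choices, not the paper's.

```python
# Minimal sketch of a Shapley-based importance "cloud" (illustrative, not ShapleyVIC itself).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Reference ("final") model and its loss on the full data.
final_model = LogisticRegression(max_iter=5000).fit(X, y)
best_loss = log_loss(y, final_model.predict_proba(X)[:, 1])
epsilon = 0.05  # tolerance defining "good enough" models (assumed value)

rng = np.random.default_rng(0)
per_model_importance = []  # rows: retained models, columns: variables

for _ in range(50):
    idx = rng.integers(0, len(X), len(X))                  # bootstrap resample
    m = LogisticRegression(max_iter=5000).fit(X.iloc[idx], y.iloc[idx])
    loss = log_loss(y, m.predict_proba(X)[:, 1])
    if loss > best_loss * (1 + epsilon):                    # keep only "good enough" models
        continue
    explainer = shap.LinearExplainer(m, X)                  # Shapley values for a linear model
    sv = explainer.shap_values(X)
    per_model_importance.append(np.abs(sv).mean(axis=0))    # global importance for this model

imp = np.vstack(per_model_importance)
overall = imp.mean(axis=0)       # pooled importance across the model cloud
uncertainty = imp.std(axis=0)    # spread across good models, the basis for inference

for name, mean_imp, sd_imp in sorted(zip(X.columns, overall, uncertainty),
                                     key=lambda t: -t[1]):
    print(f"{name:25s} importance={mean_imp:.4f} +/- {sd_imp:.4f}")
```

Reporting both the pooled mean and the spread across models, rather than a single model's SHAP values, is what distinguishes the cloud-style assessment from a standard single-model SHAP summary.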
Full text: 1 Database: MEDLINE Language: En Publication year: 2022 Document type: Article