Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to reduce preventable all-cause readmissions or death.
Chang, Ted L; Xia, Hongjing; Mahajan, Sonya; Mahajan, Rohit; Maisog, Joe; Vattikuti, Shashaank; Chow, Carson C; Chang, Joshua C.
Affiliation
  • Chang TL; Sound Prediction Inc., Columbus, OH, United States of America.
  • Xia H; Mederrata Research Inc., Columbus, OH, United States of America.
  • Mahajan S; Sound Prediction Inc., Columbus, OH, United States of America.
  • Mahajan R; Mederrata Research Inc., Columbus, OH, United States of America.
  • Maisog J; Sound Prediction Inc., Columbus, OH, United States of America.
  • Vattikuti S; Mederrata Research Inc., Columbus, OH, United States of America.
  • Chow CC; Sound Prediction Inc., Columbus, OH, United States of America.
  • Chang JC; Mederrata Research Inc., Columbus, OH, United States of America.
PLoS One ; 19(5): e0302871, 2024.
Article in English | MEDLINE | ID: mdl-38722929
ABSTRACT
We developed an inherently interpretable multilevel Bayesian framework for representing variation in regression coefficients that mimics the piecewise linearity of ReLU-activated deep neural networks. We used the framework to formulate a survival model for using medical claims to predict hospital readmission and death that focuses on discharge placement, adjusting for confounding in estimating causal local average treatment effects. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, based on their 2009-2011 inpatient episodes (approximately 1.2 million), and then tested the model on 2012 episodes (approximately 400 thousand). The model scored an out-of-sample AUROC of approximately 0.75 for predicting all-cause readmissions, defined using official Centers for Medicare and Medicaid Services (CMS) methodology, or death within 30 days of discharge, and was competitive against XGBoost and a Bayesian deep neural network, demonstrating that one need not sacrifice interpretability for accuracy. Crucially, as a regression model, it provides what black boxes cannot: its exact, gold-standard global interpretation, explicitly defining how the model performs its internal "reasoning" when mapping input data features to predictions. In doing so, we identify relative risk factors and quantify the effect of discharge placement. We also show that the posthoc explainer SHAP produces explanations that are inconsistent with the ground-truth model reasoning that our model readily admits.
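The core idea in the abstract, coefficients that vary piecewise-linearly with context in the way a ReLU network's outputs do, can be sketched in a few lines. This is a minimal hypothetical illustration, not the authors' code: each regression coefficient is written as an intercept plus a weighted sum of ReLU hinge terms in a context variable `z` (e.g. age), so the model's "reasoning" can be read off exactly for any `z` with no posthoc explainer. The names, knot locations, and weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def varying_coefficient(z, b0, weights, knots):
    """Piecewise-linear coefficient beta(z) = b0 + sum_k w_k * relu(z - knot_k)."""
    return b0 + relu(z[:, None] - knots[None, :]) @ weights

# Toy data: one input feature x, one context feature z (say, age on a 0-10 scale).
n = 1000
x = rng.normal(size=n)
z = rng.uniform(0, 10, size=n)

knots = np.array([3.0, 7.0])            # hinge locations (illustrative)
w = np.array([0.2, -0.4])               # hinge slopes (illustrative)
beta = varying_coefficient(z, 0.5, w, knots)

# Logistic-link outcome generated from the piecewise-linear coefficient.
p = 1.0 / (1.0 + np.exp(-(beta * x)))
y = rng.binomial(1, p)

# Exact global interpretation: the effective coefficient at any z is explicit.
print(varying_coefficient(np.array([0.0, 5.0, 9.0]), 0.5, w, knots))
# → [0.5 0.9 0.9]
```

In the paper's setting the coefficients would be given priors and fit in a multilevel Bayesian model; the point of the sketch is only that the coefficient surface itself is the interpretation, rather than something recovered afterwards by an explainer such as SHAP.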
Subject(s)

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Patient Discharge / Patient Readmission / Bayes Theorem / Medicare Limits: Aged / Aged80 / Female / Humans / Male Country/Region as subject: North America Language: English Journal: PLoS One Journal subject: SCIENCE / MEDICINE Year: 2024 Document type: Article Country of affiliation: United States