Interpretable machine learning models for hospital readmission prediction: a two-step extracted regression tree approach.
Gao, Xiaoquan; Alam, Sabriya; Shi, Pengyi; Dexter, Franklin; Kong, Nan.
Affiliations
  • Gao X; School of Industrial Engineering, Purdue University, West Lafayette, USA.
  • Alam S; Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, USA.
  • Shi P; Krannert School of Management, Purdue University, West Lafayette, USA. shi178@purdue.edu.
  • Dexter F; Department of Anesthesia, University of Iowa, Iowa, USA.
  • Kong N; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, USA.
BMC Med Inform Decis Mak ; 23(1): 104, 2023 06 05.
Article in English | MEDLINE | ID: mdl-37277767
ABSTRACT

BACKGROUND:

Advanced machine learning models have received wide attention for assisting medical decision making because of the greater accuracy they can achieve. However, their limited interpretability creates barriers to adoption by practitioners. Recent advances in interpretable machine learning make it possible to look inside the black box of advanced prediction methods and extract interpretable models with similar prediction accuracy, but few studies have applied this idea to the specific problem of hospital readmission prediction.

METHODS:

Our goal is to develop a machine-learning (ML) algorithm that predicts 30- and 90-day hospital readmissions as accurately as black-box algorithms while providing medically interpretable insights into readmission risk factors. Leveraging a state-of-the-art interpretable ML model, we use a two-step Extracted Regression Tree approach to achieve this goal. In the first step, we train a black-box prediction algorithm. In the second step, we extract a regression tree from the output of the black-box algorithm, allowing direct interpretation of medically relevant risk factors. We use data from a large teaching hospital in Asia to train the ML model and verify our two-step approach.
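The two steps described above follow the general pattern of model distillation, which can be sketched as follows with scikit-learn and synthetic data. The model classes, hyperparameters, and data here are illustrative assumptions, not those used in the study.

```python
# Minimal sketch of a two-step extracted-regression-tree approach
# (train a black box, then distill it into a shallow tree).
# Data, model choices, and hyperparameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for patient features and readmission labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train a black-box classifier on the readmission labels.
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# Step 2: extract an interpretable regression tree by fitting it to the
# black box's predicted probabilities (soft labels), not the raw labels.
soft_labels = black_box.predict_proba(X_train)[:, 1]
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X_train, soft_labels)

# The shallow tree's splits expose the features the black box relies on,
# which can then be read off as candidate risk factors.
risk_scores = tree.predict(X_test)
```

Because the tree regresses on probabilities rather than 0/1 labels, its leaves give graded risk estimates while its split rules remain directly inspectable.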

RESULTS:

The two-step method achieves prediction performance similar to that of the best black-box model, such as a Neural Network, as measured by three metrics: accuracy, the Area Under the Curve (AUC), and the Area Under the Precision-Recall Curve (AUPRC), while maintaining interpretability. Further, to examine whether the prediction results match known medical insights (i.e., the model is truly interpretable and produces reasonable results), we show that the key readmission risk factors extracted by the two-step approach are consistent with those found in the medical literature.
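The three reported metrics can be computed as follows with scikit-learn; the `y_true`/`y_prob` arrays are placeholder values, not the study's data.

```python
# Computing accuracy, AUC, and AUPRC for a probabilistic classifier.
# The arrays below are placeholders, not results from the paper.
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             average_precision_score)

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])          # observed readmissions
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])  # predicted risk

acc = accuracy_score(y_true, y_prob >= 0.5)     # accuracy at a 0.5 cutoff
auc = roc_auc_score(y_true, y_prob)             # Area Under the ROC Curve
auprc = average_precision_score(y_true, y_prob) # Area Under the PR Curve
```

AUPRC is often the more informative of the three when readmissions are rare, since it is not inflated by the large number of easy true negatives.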

CONCLUSIONS:

The proposed two-step approach yields prediction results that are both accurate and interpretable. This study suggests the two-step approach as a viable means of improving trust in machine-learning-based readmission prediction models in clinical practice.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Patient Readmission / Machine Learning Study type: Etiology_studies / Prognostic_studies / Risk_factors_studies Limit: Humans Language: En Journal: BMC Med Inform Decis Mak Journal subject: MEDICAL INFORMATICS Year: 2023 Document type: Article Country of affiliation: United States