Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling.
Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K; El Naqa, Issam.
Affiliation
  • Luo Y; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
  • Tseng HH; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
  • Cui S; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
  • Wei L; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
  • Ten Haken RK; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
  • El Naqa I; Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
BJR Open ; 1(1): 20190021, 2019.
Article in En | MEDLINE | ID: mdl-33178948
ABSTRACT
Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may not only depend on an accurate radiation outcomes prediction, but also needs to be informed by an understanding of the relationships among patients' characteristics, radiation response, and treatment plans. As more patient biophysical information becomes available, machine learning (ML) techniques will have great potential for improving ROP. Creating explainable ML methods is an ultimate goal for clinical practice but remains a challenging one. Towards complete explainability, the interpretability of ML approaches first needs to be explored. Hence, this review focuses on the application of ML techniques for clinical adoption in radiation oncology by balancing the accuracy and the interpretability of the predictive model of interest. An ML algorithm can generally be classified as either an interpretable (IP) or a non-interpretable (NIP, "black box") technique. While the former may provide a clearer explanation to aid clinical decision-making, it is generally outperformed by the latter in prediction performance. Therefore, great efforts and resources have been dedicated towards balancing the accuracy and the interpretability of ML approaches in ROP, but more still needs to be done. In this review, current progress to increase the accuracy of IP ML approaches is introduced, and major trends to improve the interpretability and alleviate the "black box" stigma of ML in radiation outcomes modeling are summarized. Efforts to integrate IP and NIP ML approaches to produce predictive models with higher accuracy and interpretability for ROP are also discussed.
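The IP/NIP distinction drawn in the abstract can be illustrated with a minimal sketch (not from the paper): an interpretable logistic regression versus a "black box" gradient-boosting classifier fit to synthetic stand-in data for a binary treatment outcome. The dataset and model choices are assumptions for illustration only; the logistic model exposes per-feature coefficients directly, while the boosted ensemble typically needs post-hoc explanation tools.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for patient features and a binary radiation outcome.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# IP model: coefficients give a direct feature-outcome relationship.
ip = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# NIP model: usually stronger predictions, but no directly readable parameters.
nip = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

auc_ip = roc_auc_score(y_te, ip.predict_proba(X_te)[:, 1])
auc_nip = roc_auc_score(y_te, nip.predict_proba(X_te)[:, 1])
print(f"IP  (logistic) AUC: {auc_ip:.3f}")
print(f"NIP (boosting) AUC: {auc_nip:.3f}")
print("logistic coefficients:", np.round(ip.coef_[0], 2))
```

On such toy data the two models may score similarly; the trade-off the review describes appears when the true dose-response relationship is nonlinear and the NIP model pulls ahead in accuracy while remaining harder to explain.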

Full text: 1 Database: MEDLINE Study type: Prognostic_studies Language: En Year of publication: 2019 Document type: Article
