Can physician judgment enhance model trustworthiness? A case study on predicting pathological lymph nodes in rectal cancer.
Kobayashi, Kazuma; Takamizawa, Yasuyuki; Miyake, Mototaka; Ito, Sono; Gu, Lin; Nakatsuka, Tatsuya; Akagi, Yu; Harada, Tatsuya; Kanemitsu, Yukihide; Hamamoto, Ryuji.
Affiliations
  • Kobayashi K; Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. Electronic address: kazumko
  • Takamizawa Y; Department of Colorectal Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan. Electronic address: ytakamiz@ncc.go.jp.
  • Miyake M; Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan. Electronic address: mmiyake@ncc.go.jp.
  • Ito S; Department of Colorectal Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan. Electronic address: sitosrg1@tmd.ac.jp.
  • Gu L; Machine Intelligence for Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan. Electronic address: lin.g
  • Nakatsuka T; Department of Applied Electronics, Graduate School of Advanced Engineering, Tokyo University of Science, 6-3-1 Niijuku, Katsushika-ku, Tokyo 125-8585, Japan. Electronic address: 8123526@ed.tus.ac.jp.
  • Akagi Y; Department of Biomedical Informatics, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan. Electronic address: yu-akagi@g.ecc.u-tokyo.ac.jp.
  • Harada T; Machine Intelligence for Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan. Electronic address: harad
  • Kanemitsu Y; Department of Colorectal Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan. Electronic address: ykanemit@ncc.go.jp.
  • Hamamoto R; Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. Electronic address: rhamamo
Artif Intell Med; 154: 102929, 2024 Aug.
Article in En | MEDLINE | ID: mdl-38996696
ABSTRACT
Explainability is key to enhancing the trustworthiness of artificial intelligence in medicine. However, there exists a significant gap between physicians' expectations for model explainability and the actual behavior of these models. This gap arises from the absence of a consensus on a physician-centered evaluation framework, which is needed to quantitatively assess the practical benefits that effective explainability should offer practitioners. Here, we hypothesize that superior attention maps, as a mechanism of model explanation, should align with the information that physicians focus on, potentially reducing prediction uncertainty and increasing model reliability. We employed a multimodal transformer to predict lymph node metastasis of rectal cancer using clinical data and magnetic resonance imaging. We explored how well attention maps, visualized through a state-of-the-art technique, can achieve agreement with physician understanding. Subsequently, we compared two distinct approaches for estimating uncertainty: a standalone estimation using only the variance of prediction probability, and a human-in-the-loop estimation that considers both the variance of prediction probability and the quantified agreement. Our findings revealed no significant advantage of the human-in-the-loop approach over the standalone one. In conclusion, this case study did not confirm the anticipated benefit of explanation in enhancing model reliability. Superficial explanations could do more harm than good by misleading physicians into relying on uncertain predictions, suggesting that the current state of attention mechanisms should not be overestimated in the context of model explainability.
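
The two uncertainty estimates compared in the abstract can be sketched in code. The following is a minimal, illustrative Python sketch, assuming repeated stochastic forward passes (e.g., Monte Carlo dropout) for the predictive variance and a binary physician-annotated attention mask for the agreement term; the overlap score, the weighted combination rule, the alpha weight, and all function names are hypothetical stand-ins, not the paper's actual formulation.

    import numpy as np

    def standalone_uncertainty(probs: np.ndarray) -> float:
        """Standalone estimate: variance of repeated prediction
        probabilities from stochastic forward passes of the model."""
        return float(np.var(probs))

    def attention_agreement(attn_map: np.ndarray, physician_mask: np.ndarray) -> float:
        """One plausible agreement score in [0, 1]: normalize both maps to
        unit mass, then measure their pointwise overlap (histogram
        intersection style)."""
        attn = attn_map / (attn_map.sum() + 1e-8)
        mask = physician_mask / (physician_mask.sum() + 1e-8)
        intersection = np.minimum(attn, mask).sum()
        return float(2.0 * intersection / (attn.sum() + mask.sum()))

    def human_in_the_loop_uncertainty(probs, attn_map, physician_mask, alpha=0.5):
        """Human-in-the-loop estimate: combine predictive variance with
        disagreement, so low agreement with physician attention inflates
        the estimated uncertainty. The weighting is an assumption."""
        var = standalone_uncertainty(probs)
        disagreement = 1.0 - attention_agreement(attn_map, physician_mask)
        return alpha * var + (1.0 - alpha) * disagreement

    # Toy usage: 10 stochastic metastasis probabilities, a 16x16 attention
    # map, and a physician-annotated region of interest.
    rng = np.random.default_rng(0)
    probs = rng.uniform(0.4, 0.6, size=10)
    attn = rng.random((16, 16))
    mask = np.zeros((16, 16))
    mask[4:8, 4:8] = 1.0
    print(standalone_uncertainty(probs))
    print(human_in_the_loop_uncertainty(probs, attn, mask))

Under this sketch, the study's null result would correspond to the disagreement term adding no discriminative signal over the variance term alone; the specific way the paper quantified agreement and fused it with variance should be taken from the full text, not from this illustration.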
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Rectal Neoplasms / Judgment / Lymphatic Metastasis Limits: Humans Language: En Publication year: 2024 Document type: Article