Complementary information mutual learning for multimodality medical image segmentation.
Shen, Chuyun; Li, Wenhao; Chen, Haoqing; Wang, Xiaoling; Zhu, Fengping; Li, Yuxin; Wang, Xiangfeng; Jin, Bo.
Affiliation
  • Shen C; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China. Electronic address: cyshen@stu.ecnu.edu.cn.
  • Li W; School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518172, China. Electronic address: liwenhao@cuhk.edu.cn.
  • Chen H; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China. Electronic address: 51215901005@stu.ecnu.edu.cn.
  • Wang X; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China. Electronic address: xlwang@cs.ecnu.edu.cn.
  • Zhu F; Huashan Hospital, Fudan University, Shanghai 200040, China. Electronic address: zhufengping@fudan.edu.cn.
  • Li Y; Huashan Hospital, Fudan University, Shanghai 200040, China. Electronic address: liyuxin@fudan.edu.cn.
  • Wang X; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China. Electronic address: xfwang@cs.ecnu.edu.cn.
  • Jin B; School of Software Engineering, Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 200092, China. Electronic address: bjin@tongji.edu.cn.
Neural Netw; 180: 106670, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39299035
ABSTRACT
Because of the limitations of medical imaging technology and the diversity of tumor signals, radiologists must use medical images of multiple modalities for tumor segmentation and diagnosis. This has driven the development of multimodal learning in medical image segmentation. However, redundancy among modalities creates challenges for existing subtraction-based joint learning methods, such as misjudging the importance of modalities, ignoring modality-specific information, and increasing cognitive load. These issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the complementary information mutual learning (CIML) framework, which mathematically models and mitigates the negative impact of inter-modal redundant information. CIML adopts an additive perspective and removes inter-modal redundancy through inductive bias-driven task decomposition and message passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. It then introduces a scheme in which each modality extracts information from the other modalities additively through message passing. To ensure the extracted information is non-redundant, redundancy filtering is reformulated as complementary information learning, inspired by the variational information bottleneck. This complementary information learning procedure can be solved efficiently by variational inference and cross-modal spatial attention. Numerical results on the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities and outperforms state-of-the-art methods in validation accuracy and segmentation quality. Notably, message-passing-based redundancy filtering allows neural network visualization techniques to expose the knowledge relationships among modalities, improving interpretability.
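The abstract gives no implementation details, but the core mechanism it describes, message passing between modalities with cross-modal spatial attention and a variational information-bottleneck-style penalty on the passed message, can be sketched. Below is a minimal, hypothetical PyTorch sketch; the module name `CrossModalMessage`, the architecture, and all parameter choices are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: one cross-modal message-passing step with a Gaussian
# bottleneck (VIB-style) on the message, loosely following the abstract.
# All names and design choices here are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalMessage(nn.Module):
    """Modality A queries modality B for complementary features via spatial
    attention; the message is sampled from a Gaussian whose KL to N(0, I)
    penalizes redundant bits, so only complementary information survives."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        # Predict mean and log-variance of the stochastic message.
        self.value_mu = nn.Conv2d(channels, channels, kernel_size=1)
        self.value_logvar = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2)                 # (B, C, HW)
        k = self.key(feat_b).flatten(2)                   # (B, C, HW)
        # Spatial attention: each position in A attends over positions in B.
        attn = torch.softmax(q.transpose(1, 2) @ k / c**0.5, dim=-1)  # (B, HW, HW)

        mu = self.value_mu(feat_b).flatten(2)             # (B, C, HW)
        logvar = self.value_logvar(feat_b).flatten(2)
        # Reparameterized sample: the stochastic message from B to A.
        msg = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        msg = (attn @ msg.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)

        # KL(q(z|x) || N(0, I)): the information-bottleneck regularizer.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).mean()
        return feat_a + msg, kl
```

In a full model of this kind, each modality's subtask decoder would call such a block once per other modality, and the summed KL terms would be added to the segmentation loss with a weighting coefficient, trading off how much information the messages are allowed to carry.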
Full text: 1 Database: MEDLINE Language: English Year of publication: 2024 Document type: Article