Deep learning for head and neck semi-supervised semantic segmentation.
Luan, Shunyao; Ding, Yi; Shao, Jiakang; Zou, Bing; Yu, Xiao; Qin, Nannan; Zhu, Benpeng; Wei, Wei; Xue, Xudong.
Affiliation
  • Luan S; School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China.
  • Ding Y; Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China.
  • Shao J; Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China.
  • Zou B; School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China.
  • Yu X; Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, People's Republic of China.
  • Qin N; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China.
  • Zhu B; The First Affiliated Hospital of Bengbu Medical College, Bengbu, People's Republic of China.
  • Wei W; School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China.
  • Xue X; Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China.
Phys Med Biol; 69(5), 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38306968
ABSTRACT
Objective. Radiation therapy (RT) is a prevalent treatment modality for head and neck (H&N) cancer. A crucial phase of RT planning is the precise delineation of organs-at-risk (OARs) on computed tomography (CT) scans. Manual delineation of OARs is labor-intensive, however, requiring individual scrutiny of every slice of a CT scan that typically comprises hundreds of slices. Furthermore, there is a significant domain shift between H&N data from different institutions, which makes traditional semi-supervised learning strategies susceptible to confirmation bias. Effectively using unlabeled datasets to support annotated datasets during model training has therefore become critical for countering both domain shift and confirmation bias.

Approach. In this work, we proposed an innovative cross-domain orthogon-based-perspective consistency (CD-OPC) strategy within a two-branch collaborative training framework, which compels the two sub-networks to acquire valuable features from unrelated perspectives. More specifically, a novel generative pretext task, cross-domain prediction (CDP), was designed for learning the inherent properties of CT images. This prior knowledge was then utilized to promote the independent learning of distinct features by the two sub-networks from identical inputs, thereby enhancing the perceptual capabilities of the sub-networks through orthogon-based pseudo-labeling knowledge transfer.

Main results. Our CD-OPC model was trained on H&N datasets from nine different institutions and validated on H&N datasets from four local institutions. On all datasets, CD-OPC outperformed the other semi-supervised semantic segmentation algorithms.

Significance. The CD-OPC method successfully mitigates domain shift and prevents network collapse. In addition, it enhances the network's perceptual abilities and generates more reliable predictions, thereby further addressing the confirmation bias issue.
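To make the two-branch collaborative training idea concrete, the following minimal PyTorch sketch shows one generic cross pseudo-supervision step: each sub-network is supervised on labeled scans and, on unlabeled scans, learns from the hard pseudo-labels produced by the other branch. This is an illustrative sketch of the general technique only; the names (co_training_step, net_a, net_b, lam) are hypothetical, and it deliberately omits the paper's CDP pretext task and orthogon-based perspective constraints.

import torch.nn.functional as F

def co_training_step(net_a, net_b, labeled_x, labels, unlabeled_x,
                     opt_a, opt_b, lam=0.5):
    """One generic two-branch co-training step (illustrative sketch,
    not the paper's CD-OPC implementation)."""
    # Supervised loss: both branches are trained on the annotated batch.
    sup_loss = (F.cross_entropy(net_a(labeled_x), labels)
                + F.cross_entropy(net_b(labeled_x), labels))

    # Each branch predicts the unlabeled batch independently.
    u_logits_a = net_a(unlabeled_x)
    u_logits_b = net_b(unlabeled_x)

    # Exchange hard pseudo-labels between branches; argmax/detach keeps
    # gradients from flowing back through the "teacher" branch.
    pseudo_a = u_logits_a.argmax(dim=1).detach()
    pseudo_b = u_logits_b.argmax(dim=1).detach()
    unsup_loss = (F.cross_entropy(u_logits_a, pseudo_b)
                  + F.cross_entropy(u_logits_b, pseudo_a))

    loss = sup_loss + lam * unsup_loss
    opt_a.zero_grad()
    opt_b.zero_grad()
    loss.backward()
    opt_a.step()
    opt_b.step()
    return loss.item()

In the paper's method, the pseudo-labeling exchange is additionally shaped by the prior knowledge learned through the CDP pretext task, so that the two branches view identical inputs from near-orthogonal perspectives; that coupling is beyond this sketch.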

Full text: 1 Database: MEDLINE Main subject: Deep Learning / Head and Neck Neoplasms Language: English Year of publication: 2024 Document type: Article