Integrating a deep neural network and Transformer architecture for the automatic segmentation and survival prediction in cervical cancer.
Zhu, Shitao; Lin, Ling; Liu, Qin; Liu, Jing; Song, Yanwen; Xu, Qin.
Affiliation
  • Zhu S; College of Computer and Data Science, Fuzhou University, Fuzhou, China.
  • Lin L; Department of Gynecology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China.
  • Liu Q; Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China.
  • Liu J; Department of Gynecology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China.
  • Song Y; Department of Radiation Oncology, Xiamen Humanity Hospital, Xiamen, China.
  • Xu Q; Department of Gynecology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China.
Quant Imaging Med Surg; 14(8): 5408-5419, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144008
ABSTRACT

Background:

Automated tumor segmentation and survival prediction are critical to clinical diagnosis and treatment. This study aimed to develop deep-learning models for automatic tumor segmentation and survival prediction in magnetic resonance imaging (MRI) of cervical cancer (CC) by combining deep neural networks and Transformer architecture.

Methods:

This study included 406 patients with CC, each with comprehensive clinical information and MRI scans. We randomly divided patients into training, validation, and independent test cohorts at a 6:2:2 ratio. During model training, we employed two architecture types: a hybrid model combining a convolutional neural network (CNN) and a Transformer (CoTr), and pure CNN models. For survival prediction, the hybrid model combined tumor image features extracted by the segmentation models with clinical information. The performance of the segmentation models was evaluated using the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95). The performance of the survival models was assessed using the concordance index.
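Both segmentation metrics are standard: DSC measures volumetric overlap between predicted and reference masks, and HD95 is the 95th percentile of surface-to-surface distances, which is less sensitive to single outlier voxels than the full Hausdorff distance. Below is a minimal Python sketch of the two metrics on binary masks, using NumPy and SciPy; it is not the authors' code, and the function names are illustrative.

```python
# A minimal sketch (not the authors' code) of the two segmentation metrics
# named in the abstract: the Dice similarity coefficient (DSC) and the
# 95th-percentile Hausdorff distance (HD95), computed on binary 3D masks.
import numpy as np
from scipy import ndimage

def dice(pred, truth):
    # DSC = 2|A intersect B| / (|A| + |B|); 1.0 is perfect overlap.
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hd95(pred, truth, spacing=None):
    # 95th percentile of symmetric surface-to-surface distances, in the
    # units of `spacing` (e.g., mm per voxel) if given, else in voxels.
    pred, truth = pred.astype(bool), truth.astype(bool)
    surf_p = pred ^ ndimage.binary_erosion(pred)    # surface voxels of pred
    surf_t = truth ^ ndimage.binary_erosion(truth)  # surface voxels of truth
    if not surf_p.any() or not surf_t.any():
        return float("inf")  # one mask is empty; the distance is undefined
    # Distance from every voxel to the nearest surface voxel of each mask.
    dt_t = ndimage.distance_transform_edt(~surf_t, sampling=spacing)
    dt_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    dists = np.concatenate([dt_t[surf_p], dt_p[surf_t]])
    return float(np.percentile(dists, 95))
```

Passing the voxel spacing from the MRI header as `spacing` reports HD95 in millimeters rather than voxels, which is how the metric is usually quoted.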

Results:

The CoTr model performed well in both contrast-enhanced T1-weighted (ceT1W) and T2-weighted (T2W) imaging segmentation tasks, with average DSCs of 0.827 and 0.820, respectively, outperforming the pure CNN models such as U-Net (DSC 0.807 and 0.808), attention U-Net (DSC 0.814 and 0.811), and V-Net (DSC 0.805 and 0.807). For survival prediction, the proposed deep-learning model significantly outperformed traditional methods, yielding a concordance index of 0.732. Moreover, it effectively divided patients into low-risk and high-risk groups for disease progression (P<0.001).
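In the usual Harrell formulation, the concordance index is the fraction of comparable patient pairs (pairs in which one patient's progression is observed before the other's follow-up ends) that the model ranks correctly by predicted risk. The abstract does not state which estimator was used; the sketch below assumes the standard pairwise form.

```python
# A minimal sketch of Harrell's concordance index (the abstract does not
# specify the exact estimator; this is the standard pairwise form).
import numpy as np

def concordance_index(times, events, risk_scores):
    # times:       follow-up times; events: 1 = progression, 0 = censored
    # risk_scores: higher = predicted higher risk of earlier progression
    times, events, risk = map(np.asarray, (times, events, risk_scores))
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if events[i] != 1:
            continue  # censored patients cannot anchor a comparable pair
        for j in range(len(times)):
            if times[i] < times[j]:  # i progressed before j's follow-up end
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in predicted risk count half
    return concordant / comparable if comparable else 0.5
```

On this reading, the reported 0.732 means the model correctly orders roughly 73% of comparable pairs (0.5 is chance). A median split on the same risk score, compared with a log-rank test, is a common way to produce the kind of low-/high-risk separation reported above, though the abstract does not state the exact cutoff used.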

Conclusions:

Combining a Transformer architecture with a CNN can improve MRI tumor segmentation, and the proposed deep-learning model outperformed traditional methods in predicting the survival of patients with CC.
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: English Journal: Quant Imaging Med Surg Year: 2024 Document type: Article Country of affiliation: China Country of publication: China