Reliable Delineation of Clinical Target Volumes for Cervical Cancer Radiotherapy on CT/MR Dual-Modality Images.
Sun, Ying; Wang, Yuening; Gan, Kexin; Wang, Yuxin; Chen, Ying; Ge, Yun; Yuan, Jie; Xu, Hanzi.
Affiliation
  • Sun Y; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Wang Y; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Gan K; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Wang Y; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Chen Y; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Ge Y; School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
  • Yuan J; School of Electronic Science and Engineering, Nanjing University, Nanjing, China. yuanjie@nju.edu.cn.
  • Xu H; Jiangsu Cancer Hospital, Nanjing, China. xuhanzi@njmu.edu.cn.
J Imaging Inform Med; 37(2): 575-588, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343225
ABSTRACT
Accurate delineation of the clinical target volume (CTV) is a crucial prerequisite for safe and effective radiotherapy. This study addresses the integration of magnetic resonance (MR) images to aid target delineation on computed tomography (CT) images. Because MR images are often not directly available, we employ AI-based image generation to "intelligently generate" MR images from CT images and thereby improve CT-based CTV delineation. To generate high-quality MR images, we propose an attention-guided single-loop image generation model, which yields higher-quality images by introducing an attention mechanism into feature extraction and by enhancing the loss function. Based on the generated MR images, we propose a CTV segmentation model that fuses multi-scale features through image fusion and an atrous (hollow) spatial pyramid module to enhance segmentation accuracy. The image generation model improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) from 14.87 and 0.58 to 16.72 and 0.67, respectively, and reduces the feature distribution distance (Fréchet inception distance, FID) and learned perceptual image similarity (LPIPS) from 180.86 and 0.28 to 110.98 and 0.22, respectively, achieving higher-quality image generation. The proposed segmentation method demonstrates high accuracy: compared with the FCN baseline, the intersection over union (IoU) and Dice coefficient improve from 0.8360 and 0.8998 to 0.9043 and 0.9473, respectively, while the Hausdorff distance and mean surface distance decrease from 5.5573 mm and 2.3269 mm to 4.7204 mm and 0.9397 mm, respectively, reaching clinically acceptable segmentation accuracy. Our method might reduce physicians' manual workload and accelerate the diagnosis and treatment process while decreasing inter-observer variability in identifying anatomical structures.
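The abstract describes the two networks only at a high level; the sketches below illustrate, under stated assumptions, the kind of components it names. Neither is the authors' implementation, and all module names, channel counts, and hyperparameters are illustrative.

First, a minimal attention gate of the sort that "introducing an attention mechanism into feature extraction" could refer to: a coarser gating signal re-weights encoder (skip) features so the CT-to-MR generator emphasizes relevant regions.

```python
# Hedged sketch (PyTorch): an attention gate applied to skip features.
# This is an assumption about the mechanism, not the paper's actual code.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights encoder (skip) features using a coarser gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # scalar attention map
        self.act = nn.ReLU(inplace=True)

    def forward(self, skip, gate):
        # gate is assumed to already be upsampled to the skip resolution
        attn = torch.sigmoid(self.psi(self.act(self.theta(skip) + self.phi(gate))))
        return skip * attn                                         # attended skip features

# Example: attend 64-channel encoder features with a 128-channel decoder signal
skip = torch.randn(1, 64, 128, 128)
gate = torch.randn(1, 128, 128, 128)
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 128, 128])
```

Second, a minimal atrous spatial pyramid block of the kind the segmentation model's multi-scale fusion module most likely denotes: parallel dilated convolutions capture context at several receptive-field sizes, and a 1x1 convolution fuses them back into one feature map. Dilation rates and channel counts are again assumptions.

```python
# Hedged sketch (PyTorch) of an atrous spatial pyramid block; rates and
# channel counts are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Parallel dilated convolutions see the input at several scales;
        # a 1x1 convolution then fuses the concatenated responses.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 64, 64)
print(ASPP(256, 128)(x).shape)  # torch.Size([1, 128, 64, 64])
```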
Keywords

Full text: 1 | Collection: 01-international | Database: MEDLINE | Study type: Guideline | Language: English | Journal: J Imaging Inform Med | Year: 2024 | Document type: Article | Country of affiliation: China