CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset.
Liu, Xi; Yang, Ruijie; Xiong, Tianyu; Yang, Xueying; Li, Wen; Song, Liming; Zhu, Jiarui; Wang, Mingqing; Cai, Jing; Geng, Lisheng.
Affiliations
  • Liu X; School of Physics, Beihang University, Beijing 102206, China.
  • Yang R; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China.
  • Xiong T; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China.
  • Yang X; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China.
  • Li W; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China.
  • Song L; School of Physics, Beihang University, Beijing 102206, China.
  • Zhu J; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China.
  • Wang M; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China.
  • Cai J; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China.
  • Geng L; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China.
Cancers (Basel) ; 15(22)2023 Nov 20.
Article in En | MEDLINE | ID: mdl-38001738
ABSTRACT

PURPOSE:

To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values.

MATERIALS AND METHODS:

A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model.
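The three image-quality metrics named above can be sketched in plain Python. Note this is an illustrative sketch, not the paper's evaluation code: the SSIM used in practice is typically a sliding-window computation (e.g. via scikit-image), so the single-window "global" SSIM below is a simplification, and the toy arrays, function names, and 8-bit dynamic range of 255 (rather than an HU range) are assumptions for the example.

```python
import math

def mae(x, y):
    """Mean absolute error between two equally sized images (flat lists)."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed intensity span."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=255.0):
    """Single-window (global) SSIM -- a simplification of the usual
    sliding-window formulation, using the standard c1/c2 stabilizers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: a flat "CBCT" slice offset by one intensity unit from the "CT".
ct   = [10.0, 20.0, 30.0, 40.0]
cbct = [11.0, 21.0, 31.0, 41.0]
print(mae(cbct, ct))   # 1.0
print(psnr(cbct, ct))  # ~48.13 dB at a 255 intensity range
```

Higher PSNR/SSIM and lower MAE against the planning CT indicate a better synthetic CT, which is how the results below should be read.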

RESULTS:

The MAE between the synthetic CT images generated by our model and the planning CT was 10.93 HU, compared to 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model had superior performance in both qualitative and quantitative aspects.

CONCLUSIONS:

Our model could synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved the edges of tissues well, which is important for downstream tasks in adaptive radiotherapy.

Full text: 1 Database: MEDLINE Language: En Year of publication: 2023 Document type: Article
