CBCT-based synthetic CT generation using generative adversarial networks with disentangled representation.
Liu, Jiwei; Yan, Hui; Cheng, Hanlin; Liu, Jianfei; Sun, Pengjian; Wang, Boyi; Mao, Ronghu; Du, Chi; Luo, Shengquan.
Affiliation
  • Liu J; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.
  • Yan H; Department of Radiation Oncology, National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
  • Cheng H; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.
  • Liu J; School of Electrical Engineering and Automation, Anhui University, Hefei, China.
  • Sun P; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.
  • Wang B; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.
  • Mao R; Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, China.
  • Du C; Cancer Center, The Second People's Hospital of Neijiang, Neijiang, China.
  • Luo S; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.
Quant Imaging Med Surg; 11(12): 4820-4834, 2021 Dec.
Article in En | MEDLINE | ID: mdl-34888192
ABSTRACT

BACKGROUND:

Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality limits its clinical application. In this study, we developed a deep-learning-based approach to translate CBCT images into synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures.

METHODS:

A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation. Disentangled representation was employed to extract the anatomical information shared by the CBCT and CT image domains. On-board CBCT and planning CT images of 40 patients were used for network training, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against that of three state-of-the-art CycleGAN-based methods.
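The abstract names disentangled representation as the core mechanism but does not detail the architecture. As an illustration only, the following minimal PyTorch sketch shows the general decomposition such translation GANs use: a shared encoder for anatomy (content) and a domain-specific encoder for appearance (style), with the sCT decoded from CBCT anatomy under a CT-domain style code. All module names, layer sizes, and the style-injection scheme here are hypothetical, not the paper's actual sCTGAN design.

```python
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Shared encoder: extracts anatomy (content) features from CBCT or CT."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Domain-specific encoder: captures appearance (noise/artifact) style."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decodes an image from anatomy features conditioned on a style code."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.fc = nn.Linear(style_dim, ch * 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, content, style):
        # Inject the style code as a per-channel bias (a simple stand-in for
        # the AdaIN-style conditioning common in disentangled GANs).
        b = self.fc(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content + b)

# CBCT -> sCT: shared anatomy from the CBCT, decoded with CT-domain style.
enc_anatomy, enc_style_ct, dec_ct = AnatomyEncoder(), StyleEncoder(), Decoder()
cbct = torch.randn(1, 1, 256, 256)    # one CBCT slice (toy data)
ct_ref = torch.randn(1, 1, 256, 256)  # a CT slice providing the CT style
sct = dec_ct(enc_anatomy(cbct), enc_style_ct(ct_ref))
```

In the full method, adversarial and reconstruction losses would be trained on top of this decomposition; those are omitted here for brevity.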

RESULTS:

The PSNR, SSIM, MAE, and RMSE between the sCT generated by sCTGAN and the deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, respectively, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53±14.38 HU) of the sCT generated by sCTGAN was lower than that of the sCT generated by all three comparison methods (72.40±16.03 HU by CycleGAN, 71.60±15.09 HU by CycleGAN-Unet512, and 64.93±14.33 HU by CycleGAN-AG).
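For reference, the four reported metrics have standard definitions and can be computed as below. This is a minimal sketch assuming HU-valued NumPy arrays for the sCT and dpCT and the scikit-image metric functions; it is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sct(sct, dpct):
    """Compute PSNR (dB), SSIM, MAE (HU), and RMSE (HU) between a synthetic
    CT and the deformed planning CT used as ground truth."""
    sct = sct.astype(np.float64)
    dpct = dpct.astype(np.float64)
    data_range = dpct.max() - dpct.min()  # dynamic range of the reference
    psnr = peak_signal_noise_ratio(dpct, sct, data_range=data_range)
    ssim = structural_similarity(dpct, sct, data_range=data_range)
    mae = np.mean(np.abs(sct - dpct))            # mean absolute error, HU
    rmse = np.sqrt(np.mean((sct - dpct) ** 2))   # root-mean-square error, HU
    return psnr, ssim, mae, rmse
```

The per-patient means and standard deviations quoted above would follow from applying such a function to each test patient and aggregating.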

CONCLUSIONS:

The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that generated by any of the three comparison CycleGAN-based methods. sCTGAN provides an effective way to generate high-quality sCT, which has wide applications in IGRT and adaptive radiotherapy.
Keywords

Full text: 1 Databases: MEDLINE Language: En Journal: Quant Imaging Med Surg Year of publication: 2021 Document type: Article Country of affiliation: China