Deep learning for whole-body medical image generation.
Schaefferkoetter, Joshua; Yan, Jianhua; Moon, Sangkyu; Chan, Rosanna; Ortega, Claudia; Metser, Ur; Berlin, Alejandro; Veit-Haibach, Patrick.
Affiliation
  • Schaefferkoetter J; Siemens Medical Solutions USA, Inc., 810 Innovation Drive, Knoxville, TN, 37932, USA. joshua.schaefferkoetter@siemens.com.
  • Yan J; Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
  • Moon S; Shanghai Key Laboratory for Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, 201318, China.
  • Chan R; Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
  • Ortega C; Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
  • Metser U; Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
  • Berlin A; Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
  • Veit-Haibach P; Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
Eur J Nucl Med Mol Imaging; 48(12): 3817-3826, 2021 Nov.
Article in En | MEDLINE | ID: mdl-34021779
ABSTRACT

BACKGROUND:

Artificial intelligence (AI) algorithms based on deep convolutional networks have demonstrated remarkable success for image transformation tasks. State-of-the-art results have been achieved by generative adversarial networks (GANs) and training approaches which do not require paired data. Recently, these techniques have been applied in the medical field for cross-domain image translation.
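The unpaired training mentioned above is commonly enforced with a cycle-consistency objective: translating an image to the other domain and back should recover the original. The following is an illustrative sketch only (toy linear "generators" in place of the study's deep convolutional networks), showing the structure of that loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators: G maps domain A (e.g. MR) to
# domain B (e.g. CT), and F maps B back to A. Real models would be deep
# convolutional networks; simple linear maps illustrate the loss here.
W_g = rng.normal(size=(4, 4))
W_f = np.linalg.inv(W_g)  # a perfect inverse, so the cycle loss is ~0

def G(x):
    return x @ W_g

def F(y):
    return y @ W_f

def cycle_consistency_loss(x, y):
    """L1 cycle loss used in unpaired (CycleGAN-style) training:
    x -> G(x) -> F(G(x)) should recover x, and likewise for y."""
    loss_a = np.mean(np.abs(F(G(x)) - x))
    loss_b = np.mean(np.abs(G(F(y)) - y))
    return loss_a + loss_b

x = rng.normal(size=(8, 4))  # unpaired samples from domain A
y = rng.normal(size=(8, 4))  # unpaired samples from domain B
print(round(cycle_consistency_loss(x, y), 6))  # ~0: a perfect inverse closes the cycle
```

In practice this term is added to the adversarial losses of both discriminators; the cycle term is what removes the need for paired MR/CT training data.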

PURPOSE:

This study investigated deep learning-based image transformation in medical imaging, with the goal of identifying generalizable methods that satisfy the simultaneous requirements of image quality and anatomical accuracy across the entire human body. Specifically, whole-body MR patient data acquired on a PET/MR system were used to generate synthetic CT image volumes. The capacity of these synthetic CT data for use in PET attenuation correction (AC) was evaluated and compared to current MR-based attenuation correction (MR-AC) methods, which typically use multiphase Dixon sequences to segment various tissue types.

MATERIALS AND METHODS:

This work aimed to investigate the technical performance of a GAN system for general MR-to-CT volumetric transformation and to evaluate the performance of the generated images for PET AC. A dataset comprising matched, same-day PET/MR and PET/CT patient scans was used for validation.
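Using a synthetic CT volume for PET AC requires converting CT numbers (HU) into linear attenuation coefficients (a µ-map) at 511 keV. A bilinear mapping is a common approach; the sketch below uses approximate, illustrative constants (µ of water at 511 keV ≈ 0.096 cm⁻¹; the bone-branch slope is a placeholder), not the calibration used in the study:

```python
import numpy as np

# Approximate linear attenuation coefficient of water at 511 keV (cm^-1).
MU_WATER_511 = 0.096

def hu_to_mu(hu):
    """Illustrative bilinear HU -> mu conversion for PET attenuation
    correction. Constants are approximate, not the study's calibration."""
    hu = np.asarray(hu, dtype=float)
    # Soft-tissue branch (HU <= 0): scale linearly between air and water.
    soft = MU_WATER_511 * (hu + 1000.0) / 1000.0
    # Bone branch (HU > 0): a shallower slope reflects bone's different
    # energy dependence; 5e-5 cm^-1 per HU is an illustrative value.
    bone = MU_WATER_511 + 5e-5 * hu
    return np.clip(np.where(hu <= 0, soft, bone), 0.0, None)

print(hu_to_mu([-1000, 0, 1000]))  # air, water, dense bone
```

Applying such a mapping voxel-wise to a synthetic CT volume yields the µ-map that the PET reconstruction uses for attenuation correction.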

RESULTS:

A combination of training techniques was used to produce synthetic images that were high quality and anatomically accurate. The µ-maps derived from the synthetic CT images correlated more strongly with µ-maps calculated directly from CT data than did those from the default segmented Dixon approach. Over the entire body, the total reconstructed PET activity was similar between the two MR-AC methods, but the synthetic CT method yielded higher accuracy for quantifying tracer uptake in specific regions.
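The agreement measure described above can be sketched as a voxel-wise Pearson correlation between each candidate µ-map and the CT-derived reference. The arrays and noise levels below are synthetic stand-ins for illustration, not study data:

```python
import numpy as np

def mu_map_correlation(mu_ref, mu_test):
    """Pearson correlation over all voxels of two mu-map volumes."""
    return float(np.corrcoef(mu_ref.ravel(), mu_test.ravel())[0, 1])

rng = np.random.default_rng(1)
# Stand-in reference mu-map from CT, plus two candidates: a close
# synthetic-CT-derived map and a coarser segmentation-based (Dixon-like) map.
mu_ct = rng.uniform(0.0, 0.15, size=(16, 16, 16))
mu_synth = mu_ct + rng.normal(0.0, 0.005, size=mu_ct.shape)
mu_dixon = mu_ct + rng.normal(0.0, 0.03, size=mu_ct.shape)

print(mu_map_correlation(mu_ct, mu_synth) > mu_map_correlation(mu_ct, mu_dixon))  # True
```

A higher correlation against the CT reference is the sense in which the synthetic CT µ-maps outperformed the segmented Dixon µ-maps here.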

CONCLUSION:

The findings reported here demonstrate the feasibility of this technique and its potential to improve certain aspects of attenuation correction for PET/MR systems. Moreover, this work may have larger implications for establishing generalized methods for inter-modality, whole-body transformation in medical imaging. Unsupervised deep learning techniques can produce high-quality synthetic images, but additional constraints may be needed to maintain medical integrity in the generated data.

Full text: 1 Database: MEDLINE Main subject: Deep Learning Study type: Prognostic_studies Limit: Humans Language: En Year: 2021 Document type: Article