Results 1 - 9 of 9
1.
Comput Med Imaging Graph ; 117: 102431, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39243464

ABSTRACT

CycleGAN has been leveraged to synthesize a CT image from an available MR image after being trained on unpaired data. Due to the lack of direct constraints between the synthetic and the input images, CycleGAN cannot guarantee structural consistency and often generates inaccurate mappings that shift the anatomy, which is highly undesirable for downstream clinical applications such as MRI-guided radiotherapy treatment planning and PET/MRI attenuation correction. In this paper, we propose a cycle-consistent and semantics-preserving generative adversarial network, referred to as CycleSGAN, for unpaired MR-to-CT image synthesis. Our design features a novel and generic way to incorporate semantic information into CycleGAN: a pair of three-player games within the CycleGAN framework, where each game consists of one generator and two discriminators that formulate two distinct types of adversarial learning, appearance adversarial learning and structure adversarial learning. The two types of adversarial learning are alternately trained to ensure both realistic image synthesis and semantic structure preservation. Results on unpaired hip MR-to-CT image synthesis show that our method produces better synthetic CT images in both accuracy and visual quality compared to other state-of-the-art (SOTA) unpaired MR-to-CT image synthesis methods.
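A minimal PyTorch sketch of one such three-player game as described in the abstract; the toy network shapes, the even/odd alternation rule, and all names are assumptions, and the cycle-consistency terms and discriminator updates are omitted for brevity:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))            # toy MR -> sCT generator
D_app = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2), nn.LeakyReLU(0.2),
                      nn.Conv2d(16, 1, 4, stride=2))         # judges CT appearance
D_str = nn.Sequential(nn.Conv2d(2, 16, 4, stride=2), nn.LeakyReLU(0.2),
                      nn.Conv2d(16, 1, 4, stride=2))         # judges (image, semantic map) pairs

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def generator_step(mr, semantic_map, step):
    """One generator update, alternating the two adversarial objectives."""
    sct = G(mr)
    if step % 2 == 0:                                        # appearance adversarial learning
        pred = D_app(sct)
    else:                                                    # structure adversarial learning
        pred = D_str(torch.cat([sct, semantic_map], dim=1))
    loss = bce(pred, torch.ones_like(pred))                  # fool the active discriminator
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

generator_step(torch.randn(2, 1, 64, 64), torch.rand(2, 1, 64, 64), step=0)
```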

2.
Radiat Oncol ; 19(1): 37, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486193

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) plays an increasingly important role in radiotherapy, enhancing the accuracy of target and organ-at-risk delineation, but its lack of electron density information limits further clinical application. The aim of this study was therefore to develop and evaluate a novel unsupervised network (cycleSimulationGAN) for unpaired MR-to-CT synthesis. METHODS: The proposed cycleSimulationGAN integrates a contour consistency loss function and a channel-wise attention mechanism to synthesize high-quality CT-like images. Specifically, it constrains the structural similarity between the synthetic and input images for better structural retention. Additionally, we equip the traditional GAN generator with a novel channel-wise attention mechanism to enhance the feature representation capability of the deep network and extract more effective features. The mean absolute error (MAE) in Hounsfield Units (HU), peak signal-to-noise ratio (PSNR), root-mean-square error (RMSE) and structural similarity index (SSIM) were calculated between synthetic CT (sCT) and ground truth (GT) CT images to quantify overall sCT performance. RESULTS: One hundred and sixty nasopharyngeal carcinoma (NPC) patients who underwent volumetric-modulated arc radiotherapy (VMAT) were enrolled in this study. On visual inspection, the sCT images generated by our method were more consistent with the GT than those of other methods. The average MAE, RMSE, PSNR, and SSIM calculated over twenty patients were 61.88 ± 1.42, 116.85 ± 3.42, 36.23 ± 0.52 and 0.985 ± 0.002, respectively, for the proposed method. All four image quality assessment metrics were significantly improved by our approach compared to conventional cycleGAN; the proposed cycleSimulationGAN produced significantly better synthetic results except for SSIM in bone. CONCLUSIONS: We developed a novel cycleSimulationGAN model that can effectively create sCT images comparable to GT images, which could potentially benefit MRI-based treatment planning.


Subject(s)
Nasopharyngeal Neoplasms , Neck , Humans , Head , Magnetic Resonance Imaging , Tomography, X-Ray Computed
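The abstract does not detail the channel-wise attention design; a squeeze-and-excitation-style block is one plausible reading, sketched here in PyTorch (the class name and reduction factor are assumptions):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weight feature channels from globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # excite: per-channel re-weighting

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)         # torch.Size([2, 64, 32, 32])
```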
3.
Med Phys ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137294

ABSTRACT

BACKGROUND: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. The critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE: This study introduces a novel network that cohesively unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS: The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly for MR-to-CT synthesis. This is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset of 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS: Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy from the reference proton plans. CONCLUSIONS: This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT scans.
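A PyTorch sketch of the registration side: a coordinate MLP serving as an implicit neural representation of a 2D displacement field, used to warp an image (the network sizes and 2D setting are assumptions; the paper works in 3D):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class INRDisplacement(nn.Module):
    """MLP mapping normalized (x, y) coordinates to a 2D displacement."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 2))
    def forward(self, coords):
        return self.mlp(coords)

def warp(image, dvf_net):
    """Warp `image` (N,1,H,W) by the displacement predicted at each pixel."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                 # (H, W, 2) identity grid
    dvf = dvf_net(grid.reshape(-1, 2)).reshape(h, w, 2)  # INR evaluated per pixel
    return F.grid_sample(image, (grid + dvf).unsqueeze(0).expand(n, -1, -1, -1),
                         align_corners=True)

# Alternating scheme (conceptual): update G so G(mr) matches the R-registered CT,
# then update R so warp(ct, R) matches the current G(mr).
img = torch.randn(1, 1, 32, 32)
print(warp(img, INRDisplacement()).shape)                # torch.Size([1, 1, 32, 32])
```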

4.
Int J Comput Assist Radiol Surg ; 18(1): 149-156, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35984606

ABSTRACT

PURPOSE: CycleGAN and its variants are widely used in medical image synthesis because they can be trained on unpaired data. The most common approach is to use a Generative Adversarial Network (GAN) model to process 2D slices and then stack these slices into a 3D medical image. However, such methods often introduce spatial inconsistencies between contiguous slices. We propose a new CycleGAN-based model to address this problem, achieving high-quality conversion from magnetic resonance (MR) to computed tomography (CT) images. METHODS: To achieve spatial consistency in the 3D output while avoiding memory-heavy 3D convolutions, we reorganize each set of three adjacent slices into a 2.5D slice used as the input image. Further, we propose a U-Net discriminator network, which perceives input objects both locally and globally, to improve accuracy. The model also uses Content-Aware ReAssembly of Features (CARAFE) upsampling, which offers a large field of view and content-aware kernels in place of a single fixed kernel for all samples. RESULTS: The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) for the double U-Net CycleGAN's 3D image synthesis are 74.56 ± 10.02, 27.12 ± 0.71 and 0.84 ± 0.03, respectively. Our method achieves better results than state-of-the-art methods. CONCLUSION: The experimental results indicate that our method can convert MR to CT images using unpaired data and outperforms state-of-the-art methods. Compared with 3D CycleGAN, it synthesizes better 3D CT images with less computation and memory.


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy
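A minimal sketch of the 2.5D reorganization described in the abstract: each set of three adjacent axial slices becomes the three channels of one training sample (shapes are illustrative):

```python
import torch

def to_25d(volume):
    """(D, H, W) volume -> (D-2, 3, H, W): each slice stacked with its neighbours."""
    return torch.stack([volume[i - 1:i + 2] for i in range(1, volume.shape[0] - 1)])

vol = torch.randn(40, 256, 256)
print(to_25d(vol).shape)        # torch.Size([38, 3, 256, 256])
```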
5.
Comput Methods Programs Biomed ; 237: 107571, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37156020

ABSTRACT

BACKGROUND: Computed tomography (CT) and magnetic resonance imaging (MRI) are the mainstream imaging technologies in clinical practice. CT reveals high-quality anatomical and physiopathological structures, especially bone tissue, for clinical diagnosis, while MRI provides high resolution in soft tissue and is sensitive to lesions. Combining CT with MRI has become routine in image-guided radiation treatment planning. METHODS: In this paper, to reduce the radiation dose of CT examinations and ameliorate the limitations of traditional virtual imaging technologies, we propose a generative MRI-to-CT transformation method with structural perceptual supervision. Even when the registered MRI-CT dataset is structurally misaligned, the proposed method can better align the structural information of synthetic CT (sCT) images with the input MRI images while simulating the CT modality in the MRI-to-CT cross-modality transformation. RESULTS: We retrieved a total of 3416 paired brain MRI-CT images as the train/test dataset, comprising 1366 training images from 10 patients and 2050 test images from 15 patients. Several methods (the baselines and the proposed method) were evaluated by the HU difference map, HU distribution, and various similarity metrics, including the mean absolute error (MAE), structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). In our quantitative experiments, the proposed method achieves the lowest mean MAE of 0.147, the highest mean PSNR of 19.27, and a mean NCC of 0.431 on the overall CT test dataset. CONCLUSIONS: Both qualitative and quantitative results for the synthetic CT validate that the proposed method preserves the structural information of bone tissue in the target CT more faithfully than the baseline methods. Furthermore, it provides better HU intensity reconstruction for simulating the distribution of the CT modality. These results indicate that the proposed method warrants further investigation.


Subject(s)
Image Processing, Computer-Assisted , Radiotherapy, Image-Guided , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging
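Of the evaluation metrics listed, NCC is the least commonly spelled out; a standard whole-image formulation is sketched below (function name assumed):

```python
import torch

def ncc(a, b):
    """Normalized cross-correlation between two images, in [-1, 1]."""
    a, b = a.flatten().float(), b.flatten().float()
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + 1e-8)

x = torch.randn(64, 64)
print(ncc(x, 0.5 * x + 0.1).item())   # ~1.0 for linearly related images
```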
6.
Biomed Phys Eng Express ; 7(2)2021 02 24.
Article in English | MEDLINE | ID: mdl-33545707

ABSTRACT

Background and purpose. Replacing CT imaging with MR imaging for MR-only radiotherapy has sparked the interest of many scientists and is being increasingly adopted in radiation oncology. Although many studies have focused on generating CT images from MR images, models have typically been tested only on data from the same dataset; how well a trained model works for data from different hospitals and MR protocols remains unknown. In this study, we address the model generalization problem for the MR-to-CT conversion task. Materials and methods. Brain T2 MR and corresponding CT images were collected from SZSPH (source domain dataset); brain T1-FLAIR, T1-POST MR, and corresponding CT images were collected from The University of Texas Southwestern (UTSW) (target domain dataset). To investigate generalization ability, four potential solutions were proposed: a source model, a target model, a combined model, and an adapted model, all trained using the CycleGAN network. The source model was trained from scratch on the source domain dataset and tested on the target domain dataset. The target model was trained and tested on the target domain dataset. The combined model was trained on both the source and target domain datasets and tested on the target domain dataset. The adapted model used a transfer learning strategy: a CycleGAN model was trained on the source domain dataset and the pre-trained model was then retrained on the target domain dataset. MAE, RMSE, PSNR, and SSIM were used to quantitatively evaluate model performance on the target domain dataset. Results. The adapted model achieved the best quantitative results: 74.56 ± 8.61, 193.18 ± 17.98, 28.30 ± 0.83, and 0.84 ± 0.01 for MAE, RMSE, PSNR, and SSIM on the T1-FLAIR dataset, and 74.89 ± 15.64, 195.73 ± 31.29, 27.72 ± 1.43, and 0.83 ± 0.04 on the T1-POST dataset. The source model performed worst. Conclusions. This work indicates that a pre-trained CycleGAN can generalize well, generating synthetic CT images from small MR training datasets. The quantitative results on test data spanning different scanning protocols and acquisition centers support this concept.


Subject(s)
Deep Learning , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Tomography, X-Ray Computed
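A minimal PyTorch sketch of the adapted-model strategy (pre-train on the source domain, then retrain on the target domain); the stand-in generator, checkpoint path, and fine-tuning learning rate are assumptions, and the full CycleGAN losses are elided:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))       # stand-in for the CycleGAN generator
# G.load_state_dict(torch.load("source_domain_G.pt"))   # hypothetical source-domain checkpoint

opt = torch.optim.Adam(G.parameters(), lr=1e-4)         # reduced LR for fine-tuning
target_mr = torch.randn(4, 1, 64, 64)                   # toy target-domain batch
target_ct = torch.randn(4, 1, 64, 64)
loss = nn.functional.l1_loss(G(target_mr), target_ct)   # full setup keeps CycleGAN losses too
opt.zero_grad(); loss.backward(); opt.step()
```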
7.
Comput Biol Med ; 136: 104763, 2021 09.
Article in English | MEDLINE | ID: mdl-34449305

ABSTRACT

Medical image acquisition plays a significant role in the diagnosis and management of diseases. Magnetic Resonance (MR) and Computed Tomography (CT) are two of the most popular modalities for medical image acquisition. Considerations such as cost and radiation dose may limit the acquisition of certain image modalities, so medical image synthesis can be used to generate required medical images without actual acquisition. In this paper, we propose a paired-unpaired Unsupervised Attention Guided Generative Adversarial Network (uagGAN) model to translate MR images to CT images and vice versa. The uagGAN model is pre-trained with a paired dataset for initialization and then retrained on an unpaired dataset using a cascading process. In the paired pre-training stage, we enhance the loss function by combining the Wasserstein GAN adversarial loss with a new combination of non-adversarial losses (content loss and L1) to generate images with fine structure. This ensures global consistency and better captures the high- and low-frequency details of the generated images. uagGAN produces attention masks, yielding more accurate and sharper translations. Knowledge from a non-medical pre-trained model is also transferred to the uagGAN model for improved learning and better image translation performance. Quantitative evaluation and qualitative perceptual analysis by radiologists indicate that transfer learning with the proposed paired-unpaired uagGAN model achieves better performance than rival image-to-image translation models.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Attention , Brain/diagnostic imaging , Machine Learning , Magnetic Resonance Spectroscopy
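A sketch of the combined objective named in the abstract: Wasserstein adversarial loss plus non-adversarial content and L1 terms. The loss weights are assumptions, and the content term here reuses raw images as a stand-in for the pretrained-network features a real content loss would compare:

```python
import torch
import torch.nn.functional as F

def generator_loss(critic_fake, fake, real, feat_fake, feat_real,
                   lambda_content=1.0, lambda_l1=10.0):
    adv = -critic_fake.mean()                   # WGAN generator objective
    content = F.mse_loss(feat_fake, feat_real)  # content loss on deep features
    l1 = F.l1_loss(fake, real)                  # pixel-wise L1
    return adv + lambda_content * content + lambda_l1 * l1

fake, real = torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32)
critic_fake = torch.randn(2, 1)                 # critic scores for fake images
print(generator_loss(critic_fake, fake, real, feat_fake=fake, feat_real=real))
```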
8.
Appl Sci (Basel) ; 11(4): 1667, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33763236

ABSTRACT

Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in medical image analysis, both for quantification and for diagnosis. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory, involving many challenges: large image size with a limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose an uncertainty-aware multi-channel multi-resolution 3D cascade network specifically aimed at whole-body MR-to-CT synthesis. The mean absolute error on the synthetic CT generated with the MultiResunc network (73.90 HU) compares favourably with multiple baseline CNNs such as 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU) and deep boosted regression (77.58 HU), showing superior synthesis performance. We ultimately exploit the extrapolation properties of the MultiRes networks on sub-regions of the body.
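The abstract does not specify how uncertainty awareness is formulated; a heteroscedastic Gaussian likelihood, where the network predicts a per-voxel mean and log-variance, is one common construction and is sketched here purely as an assumption:

```python
import torch

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood for a per-voxel Gaussian prediction of the sCT."""
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2
            + 0.5 * log_var).mean()

mu, lv, ct = torch.zeros(8), torch.zeros(8), torch.randn(8)
print(gaussian_nll(mu, lv, ct).item())
```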

9.
Med Image Anal ; 71: 102079, 2021 07.
Article in English | MEDLINE | ID: mdl-33951598

ABSTRACT

The quality of synthesised/pseudo Computed Tomography (pCT) images is commonly measured by intensity-wise similarity between the ground truth CT and the pCT. However, when the pCT is used as an attenuation map (µ-map) for PET reconstruction in Positron Emission Tomography / Magnetic Resonance Imaging (PET/MRI), minimising the error between pCT and CT neglects the main objective: predicting a pCT that, when used as a µ-map, reconstructs a pseudo-PET (pPET) as similar as possible to the gold-standard CT-derived PET reconstruction. This observation motivated us to propose a novel multi-hypothesis deep learning framework explicitly aimed at the PET reconstruction application. A convolutional neural network (CNN) synthesises pCTs by minimising a combination of the pixel-wise error between pCT and CT and a novel metric-loss that is itself defined by a CNN and aims to minimise the consequent PET residuals. Training is performed on a database of twenty 3D MR/CT/PET brain image pairs. Quantitative results on a fully independent dataset of twenty-three 3D MR/CT/PET image pairs show that the network synthesises more accurate pCTs: the mean absolute error on the pCT (110.98 ± 19.22 HU) is lower than that of a baseline CNN (172.12 ± 19.61 HU) and a multi-atlas propagation approach (153.40 ± 18.68 HU), and subsequently leads to a significant improvement in the PET reconstruction error (4.74% ± 1.52%, compared to 13.72% ± 2.48% for the baseline and 6.68% ± 2.06% for multi-atlas propagation).


Subject(s)
Image Processing, Computer-Assisted , Imitative Behavior , Humans , Magnetic Resonance Imaging , Positron-Emission Tomography , Tomography, X-Ray Computed
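A highly simplified sketch of the two-term objective: pixel-wise error between pCT and CT plus a learned metric-loss CNN standing in for the consequent PET residual. The metric network here is a toy 2D stand-in; its true design, training, and 3D setting are not given by the abstract:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

metric_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 1, 3, padding=1))  # predicts a PET-residual map

def synthesis_loss(pct, ct, lambda_metric=1.0):
    pixel = F.l1_loss(pct, ct)                               # intensity-wise term
    pet_residual = metric_net(torch.cat([pct, ct], dim=1))   # learned proxy for PET error
    return pixel + lambda_metric * pet_residual.abs().mean()

print(synthesis_loss(torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32)))
```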