1.
Radiat Oncol ; 19(1): 61, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773620

ABSTRACT

PURPOSE: Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma. METHODS: This study involved multi-parametric brain MRI scans (T1, T1-contrast enhanced, T2, FLAIR) acquired at pre-operative and follow-up time points for 160 patients diagnosed with glioma, representing the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow using a 3D U-Net style architecture as its core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid based on the derived transformation parameters, and (3) the spatial transformation layer generates a warped image by applying the sampling operation using interpolation. A similarity measure was used as the loss function for the network, with a regularization parameter limiting the deformation. The model was trained via unsupervised learning using pairs of MRI scans on a training set (n = 102) and validated on a validation set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) by computing the Dice score and structural similarity index (SSIM) metrics. The model's performance was also compared with the baseline state-of-the-art VoxelMorph (VM1 and VM2) learning-based algorithms. RESULTS: The ConvUNet-DIR model performed accurate 3D deformable registration, achieving a mean Dice score of 0.975 ± 0.003 and an SSIM of 0.908 ± 0.011 on the test set (n = 32). Experimental results also demonstrated that ConvUNet-DIR outperformed the VoxelMorph algorithms on both the Dice (VM1: 0.969 ± 0.006 and VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012 and VM2: 0.857 ± 0.017) metrics. Registering a pair of MRI scans takes about 1 s on a CPU. CONCLUSIONS: The developed deep learning-based model can perform end-to-end deformable registration of a pair of 3D MRI scans for glioma patients without human intervention. The model provides accurate, efficient, and robust deformable registration without needing pre-alignment or labeling. It outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms and other supervised/unsupervised deep learning-based methods reported in the literature.
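To make components (2) and (3) of the workflow concrete, here is a minimal PyTorch sketch that builds a sampling grid from a predicted displacement field and warps the moving image by trilinear interpolation, together with an unsupervised loss of the general form described (similarity plus deformation regularizer). The tensor shapes, the MSE similarity term, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_3d(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `moving` (B, 1, D, H, W) by a displacement field `flow` (B, 3, D, H, W), in voxels."""
    b, _, d, h, w = moving.shape
    # Component (2): identity sampling grid plus predicted displacements.
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((zz, yy, xx)).float().unsqueeze(0)       # (1, 3, D, H, W)
    scale = torch.tensor([d - 1.0, h - 1.0, w - 1.0]).view(1, 3, 1, 1, 1)
    coords = 2.0 * (grid + flow) / scale - 1.0                  # normalize to [-1, 1]
    coords = coords.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]      # (B, D, H, W, 3), (x, y, z) order
    # Component (3): resample the moving image by trilinear interpolation.
    return F.grid_sample(moving, coords, mode="bilinear", align_corners=True)

def registration_loss(fixed, warped, flow, lam=0.01):
    # Image-similarity term plus a smoothness regularizer on the flow;
    # MSE and the weight `lam` are assumptions, not the paper's exact choices.
    reg = sum((flow.diff(dim=dim) ** 2).mean() for dim in (2, 3, 4))
    return F.mse_loss(warped, fixed) + lam * reg
```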


Subjects
Brain Neoplasms; Deep Learning; Glioma; Magnetic Resonance Imaging; Unsupervised Machine Learning; Humans; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/radiotherapy; Glioma/diagnostic imaging; Glioma/radiotherapy; Glioma/pathology; Radiation Oncology/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
2.
J Appl Clin Med Phys ; 24(12): e14120, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37552487

ABSTRACT

Recent studies have raised broad safety and health concerns about the use of gadolinium contrast agents during magnetic resonance imaging (MRI) to enhance the identification of active tumors. In this paper, we developed a deep learning-based method for three-dimensional (3D) contrast-enhanced T1-weighted (T1) image synthesis from contrast-free image(s). The MR images of 1251 patients with glioma from the RSNA-ASNR-MICCAI BraTS Challenge 2021 dataset were used in this study. A 3D dense-dilated residual U-Net (DD-Res U-Net) was developed for contrast-enhanced T1 image synthesis from contrast-free image(s). The model was trained on a randomly split training set (n = 800) using a customized loss function and validated on a validation set (n = 200) to improve its generalizability. The generated images were quantitatively assessed against the ground truth on a test set (n = 251) using the mean absolute error (MAE), mean-squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mutual information (NMI), and Hausdorff distance (HDD) metrics. We also performed a qualitative visual similarity assessment between the synthetic and ground-truth images. The effectiveness of the proposed model was compared with a 3D U-Net baseline model and existing deep learning-based methods in the literature. Our proposed DD-Res U-Net model achieved promising performance for contrast-enhanced T1 synthesis in both quantitative metrics and perceptual evaluation on the test set (n = 251). Analysis of results on the whole brain region showed a PSNR (in dB) of 29.882 ± 5.924, an SSIM of 0.901 ± 0.071, an MAE of 0.018 ± 0.013, an MSE of 0.002 ± 0.002, an HDD of 2.329 ± 9.623, and an NMI of 1.352 ± 0.091 when using only T1 as input; and a PSNR (in dB) of 30.284 ± 4.934, an SSIM of 0.915 ± 0.063, an MAE of 0.017 ± 0.013, an MSE of 0.001 ± 0.002, an HDD of 1.323 ± 3.551, and an NMI of 1.364 ± 0.089 when combining T1 with other MRI sequences. Our model outperformed the U-Net baseline model and demonstrated excellent capability in generating synthetic contrast-enhanced T1 images of the whole brain region from contrast-free MR image(s), particularly when using multiple contrast-free images as input. Because tumor mask information was not incorporated during network training, performance in the tumor regions was inferior to that in the whole brain; further improvements are required before the method can replace gadolinium administration in neuro-oncology.
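As an illustration of the quantitative assessment above, the sketch below computes MAE, MSE, PSNR, and SSIM between a synthetic volume and its ground truth with scikit-image, assuming both are intensity-normalized to [0, 1]; the NMI and boundary-based Hausdorff computations are omitted, and the array names are hypothetical.

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(synthetic: np.ndarray, truth: np.ndarray) -> dict:
    """Image-quality metrics for 3D volumes normalized to [0, 1]."""
    return {
        "MAE":  float(np.mean(np.abs(synthetic - truth))),
        "MSE":  float(mean_squared_error(truth, synthetic)),
        "PSNR": float(peak_signal_noise_ratio(truth, synthetic, data_range=1.0)),
        "SSIM": float(structural_similarity(truth, synthetic, data_range=1.0)),
    }

# Example with random stand-in volumes:
scores = evaluate(np.random.rand(64, 64, 64), np.random.rand(64, 64, 64))
```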


Subjects
Gadolinium; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain
3.
J Appl Clin Med Phys ; 24(9): e14015, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37138549

ABSTRACT

PURPOSE: In this paper, we compare four novel knowledge-based planning (KBP) algorithms using deep learning to predict three-dimensional (3D) dose distributions of head and neck plans, using the same patient dataset and quantitative assessment metrics. METHODS: A dataset of 340 oropharyngeal cancer patients treated with intensity-modulated radiation therapy, representing the AAPM OpenKBP-2020 Grand Challenge dataset, was used in this study. Four 3D convolutional neural network architectures were built: U-Net, attention U-Net, residual U-Net (Res U-Net), and attention Res U-Net. The models were trained on 64% of the dataset and validated on 16% for voxel-wise dose prediction. The trained models were then evaluated on a test set (20% of the data) by comparing the predicted dose distributions against the ground truth using dose statistics and dose-volume indices. RESULTS: The four KBP dose prediction models exhibited promising performance, with an average mean absolute dose error within the body contour of <3 Gy on the 68 plans in the test set. The average difference in predicting the D99 index for all targets was 0.92 Gy (p = 0.51) for attention Res U-Net, 0.94 Gy (p = 0.40) for Res U-Net, 2.94 Gy (p = 0.09) for attention U-Net, and 3.51 Gy (p = 0.08) for U-Net. For the OARs, the average differences in predicting the Dmax and Dmean indices were 2.72 Gy (p < 0.01) for attention Res U-Net, 2.94 Gy (p < 0.01) for Res U-Net, 1.10 Gy (p < 0.01) for attention U-Net, and 0.84 Gy (p = 0.29) for U-Net. CONCLUSION: All models demonstrated almost comparable performance for voxel-wise dose prediction. KBP models that employ a 3D U-Net architecture as a base could be deployed for clinical use to improve cancer patient treatment by creating plans of consistent quality and making the radiotherapy workflow more efficient.
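For reference, the dose-volume indices used above can be computed directly from a predicted 3D dose grid and a structure mask; a small numpy sketch follows, where D99 is the dose received by at least 99% of the structure's voxels. The names and shapes are hypothetical.

```python
import numpy as np

def dose_indices(dose: np.ndarray, mask: np.ndarray) -> dict:
    """dose: 3D dose grid in Gy; mask: boolean array of the same shape."""
    voxels = dose[mask]  # doses inside the structure
    return {
        "D99":   float(np.percentile(voxels, 1)),  # dose covering 99% of the volume
        "Dmax":  float(voxels.max()),
        "Dmean": float(voxels.mean()),
    }
```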


Subjects
Deep Learning; Radiotherapy, Intensity-Modulated; Humans; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Neck; Head; Radiotherapy, Intensity-Modulated/methods; Organs at Risk
4.
J Appl Clin Med Phys ; 23(7): e13630, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35533234

ABSTRACT

PURPOSE: Deep learning-based knowledge-based planning (KBP) methods have been introduced for radiotherapy dose distribution prediction to reduce planning time and maintain consistent, high-quality plans. This paper presents a novel KBP model using an attention-gating mechanism and a three-dimensional (3D) U-Net for intensity-modulated radiation therapy (IMRT) 3D dose distribution prediction in head-and-neck cancer. METHODS: A total of 340 head-and-neck cancer plans, representing the OpenKBP-2020 AAPM Grand Challenge data set, were used in this study. All patients were treated with the IMRT technique and a dose prescription of 70 Gy. The data set was randomly divided into 64%/16%/20% as training/validation/testing cohorts. An attention-gated 3D U-Net architecture was developed to predict the full 3D dose distribution. The model was trained using the mean-squared error loss function, the Adam optimization algorithm, a learning rate of 0.001, 120 epochs, and a batch size of 4. In addition, a baseline U-Net model was trained in the same way for comparison. Model performance was evaluated on the testing data set by comparing the generated dose distributions against the ground-truth dose distributions using dose statistics and clinical dosimetric indices, and was also compared to the baseline model and to the reported results of other deep learning-based dose prediction models. RESULTS: The proposed attention-gated 3D U-Net model accurately predicted 3D dose distributions that closely replicated the ground-truth dose distributions of the 68 plans in the test set. The average mean absolute dose error was 2.972 ± 1.220 Gy (vs. 2.920 ± 1.476 Gy for the baseline U-Net) in the brainstem, 4.243 ± 1.791 Gy (vs. 4.530 ± 2.295 Gy) in the left parotid, 4.622 ± 1.975 Gy (vs. 4.223 ± 1.816 Gy) in the right parotid, 3.346 ± 1.198 Gy (vs. 2.958 ± 0.888 Gy) in the spinal cord, 6.582 ± 3.748 Gy (vs. 5.114 ± 2.098 Gy) in the esophagus, 4.756 ± 1.560 Gy (vs. 4.992 ± 2.030 Gy) in the mandible, 4.501 ± 1.784 Gy (vs. 4.925 ± 2.347 Gy) in the larynx, 2.494 ± 0.953 Gy (vs. 2.648 ± 1.247 Gy) in the PTV_70, and 2.432 ± 2.272 Gy (vs. 2.811 ± 2.896 Gy) in the body contour. The average difference in predicting the D99 value for the targets (PTV_70, PTV_63, and PTV_56) was 2.50 ± 1.77 Gy. For the organs at risk, the average differences in predicting the Dmax (brainstem, spinal cord, and mandible) and Dmean (left parotid, right parotid, esophagus, and larynx) values were 1.43 ± 1.01 and 2.44 ± 1.73 Gy, respectively. The average homogeneity index was 7.99 ± 1.45 for the predicted plans versus 5.74 ± 2.95 for the ground-truth plans, whereas the average conformity index was 0.63 ± 0.17 for the predicted plans versus 0.89 ± 0.19 for the ground-truth plans. The proposed model needs less than 5 s to predict a full 3D dose distribution of 64 × 64 × 64 voxels for a new patient, which is sufficient for real-time applications. CONCLUSIONS: The attention-gated 3D U-Net model demonstrated the capability to predict accurate 3D dose distributions for head-and-neck IMRT plans with consistent quality. Its prediction performance was overall superior to that of a baseline standard U-Net model, and it was competitive with the best state-of-the-art dose prediction methods reported in the literature. The proposed model could be used to obtain dose distributions for decision-making before planning, for quality assurance of planning, and for guiding automated planning toward improved plan consistency, quality, and planning efficiency.
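A minimal sketch of the attention-gating mechanism named above, in the style of additive attention gates on U-Net skip connections: the channel sizes are assumptions, and the gating signal is assumed to have been resampled to the skip features' spatial size. This is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Weights skip-connection features by a voxel-wise attention map."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip features (B, skip_ch, D, H, W);
        # g: gating signal resampled to the same D, H, W.
        att = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * att  # suppress regions irrelevant to the decoder

# Example: gate 32-channel skip features with a 64-channel gating signal.
gate = AttentionGate3D(32, 64, 16)
out = gate(torch.rand(1, 32, 16, 16, 16), torch.rand(1, 64, 16, 16, 16))
```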


Subjects
Head and Neck Neoplasms; Radiotherapy, Intensity-Modulated; Attention; Head and Neck Neoplasms/radiotherapy; Humans; Neural Networks, Computer; Organs at Risk; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
5.
J Appl Clin Med Phys ; 23(4): e13530, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35044073

ABSTRACT

PURPOSE: The availability of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts for the brain. METHODS: The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping of a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated with 2D paired MR images using a mean-squared error (MSE) cost function, the Adam optimizer with a learning rate of 0.001, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: The synthetic MR images generated by our model were nearly indistinguishable from the real images on the testing dataset for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited loss of detail. The ranges of the average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best results reported by other deep learning models on BRATS datasets. CONCLUSIONS: Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts. It holds promise for clinical use, supporting improved clinical decision-making and better diagnosis of brain cancer patients through the availability of multicontrast MRIs. This approach may be clinically relevant and represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.
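For concreteness, a condensed sketch of the training setup stated above: 2D paired slices, an MSE loss, the Adam optimizer with learning rate 0.001, and 120 epochs at batch size 32. The tiny convolutional stand-in model and the random tensors are placeholders for the U-Net and the paired BRATS slices, not the authors' code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder generator; a real U-Net encoder-decoder would replace this.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

# Dummy paired slices (source contrast -> target contrast, e.g. T1 -> T2).
source = torch.rand(64, 1, 128, 128)
target = torch.rand(64, 1, 128, 128)
loader = DataLoader(TensorDataset(source, target), batch_size=32, shuffle=True)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(120):
    for src, tgt in loader:
        optimizer.zero_grad()
        loss = criterion(model(src), tgt)
        loss.backward()
        optimizer.step()
```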


Subjects
Brain Neoplasms; Deep Learning; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer