Results 1 - 3 of 3

1.
J Appl Clin Med Phys; 22(1): 308-317, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33410568

ABSTRACT

PURPOSE: To evaluate the dosimetric and image-guided radiation therapy (IGRT) performance of a novel generative adversarial network (GAN)-generated synthetic CT (synCT) in the brain and to compare its performance for clinical use, including conventional brain radiotherapy, cranial stereotactic radiosurgery (SRS), and planar and volumetric IGRT.

METHODS AND MATERIALS: SynCT images for 12 brain cancer patients (6 SRS, 6 conventional) were generated from T1-weighted post-gadolinium magnetic resonance (MR) images by applying a GAN model with a residual network (ResNet) generator and, as the discriminator, a convolutional neural network (CNN) with five convolutional layers that classified input images as real or synthetic. Following rigid registration, clinical structures and treatment plans derived from simulation CT (simCT) images were transferred to synCTs. Dose was recalculated for 15 simCT/synCT plan pairs using fixed monitor units. Two-dimensional (2D) gamma analysis (2%/2 mm, 1%/1 mm) was performed to compare dose distributions at isocenter. Dose-volume histogram (DVH) metrics (D95%, D99%, D0.2cc, and D0.035cc) were assessed for the targets and organs at risk (OARs). IGRT performance was evaluated via volumetric registration between cone beam CT (CBCT) and synCT/simCT and planar registration between kV images and synCT/simCT digitally reconstructed radiographs (DRRs).

RESULTS: Average gamma passing rates at 1%/1 mm and 2%/2 mm were 99.0 ± 1.5% and 99.9 ± 0.2%, respectively. Excellent agreement in DVH metrics was observed (mean difference ≤0.10 ± 0.04 Gy for targets and 0.13 ± 0.04 Gy for OARs). Population-averaged CBCT-synCT registrations differed from simCT-based registrations by <0.2 mm and 0.1 degree. The mean difference between kV-synCT DRR and kV-simCT DRR registrations was <0.5 mm, with no statistically significant differences observed (P > 0.05). An outlier with a large resection cavity represented the worst case.

CONCLUSION: Brain GAN synCTs demonstrated excellent performance for dosimetric and IGRT endpoints, offering potential use in high-precision brain cancer therapy.


Subjects
Deep Learning, Image-Guided Radiotherapy, Brain/diagnostic imaging, Brain/surgery, Humans, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted
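The 2%/2 mm and 1%/1 mm criteria quoted in the abstract above refer to the standard gamma-index comparison of two dose planes. The sketch below is a minimal, brute-force illustration of a global 2-D gamma passing-rate calculation, not the authors' code; the array names, pixel spacing, and 10% low-dose cutoff are assumptions chosen only for the example.

import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_pct=2.0, dta_mm=2.0, low_dose_cutoff_pct=10.0):
    """Percentage of evaluated pixels with gamma <= 1 for two 2-D dose planes.

    dose_ref, dose_eval : reference (simCT) and evaluated (synCT) dose planes, same shape.
    spacing_mm          : pixel size of the planes in mm.
    dose_pct, dta_mm    : gamma criteria, e.g. 2%/2 mm or 1%/1 mm.
    """
    dd = dose_pct / 100.0 * dose_ref.max()            # global dose tolerance
    search = int(np.ceil(3 * dta_mm / spacing_mm))    # limit the DTA search window
    ny, nx = dose_ref.shape

    # offsets inside the search window and their squared physical distances
    oy, ox = np.mgrid[-search:search + 1, -search:search + 1]
    dist2 = (oy * spacing_mm) ** 2 + (ox * spacing_mm) ** 2

    evaluate = dose_ref > low_dose_cutoff_pct / 100.0 * dose_ref.max()
    gammas = []
    for iy, ix in zip(*np.nonzero(evaluate)):
        best = np.inf
        for dy, dx, d2 in zip(oy.ravel(), ox.ravel(), dist2.ravel()):
            jy, jx = iy + dy, ix + dx
            if 0 <= jy < ny and 0 <= jx < nx:
                diff = dose_eval[jy, jx] - dose_ref[iy, ix]
                best = min(best, d2 / dta_mm ** 2 + diff ** 2 / dd ** 2)
        gammas.append(np.sqrt(best))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# usage with synthetic planes (replace with exported simCT / synCT dose planes at isocenter)
ref = np.random.rand(64, 64) * 20.0
ev = ref + np.random.normal(0.0, 0.05, ref.shape)
print(gamma_pass_rate(ref, ev, spacing_mm=1.0, dose_pct=2.0, dta_mm=2.0))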
2.
Article in English | MEDLINE | ID: mdl-34094039

ABSTRACT

Recently, interest in MR-only treatment planning using synthetic CTs (synCTs) has grown rapidly in radiation therapy. However, developing class solutions for medical images that contain atypical anatomy remains a major limitation. In this paper, we propose a novel spatial attention-guided generative adversarial network (attention-GAN) model to generate accurate synCTs from T1-weighted MRI images in order to address atypical anatomy. Experimental results on fifteen brain cancer patients show that attention-GAN outperformed existing synCT models, achieving average MAEs of 85.223 ± 12.08, 232.41 ± 60.86, and 246.38 ± 42.67 Hounsfield units between synCT and CT-SIM across the entire head, bone, and air regions, respectively. Qualitative analysis shows that attention-GAN uses spatially focused areas to better handle outliers, regions with complex anatomy, and post-surgical regions, and thus offers strong potential for supporting near real-time MR-only treatment planning.
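The abstract does not specify the attention mechanism in detail. The PyTorch sketch below shows one common form of a spatial attention gate (average- and max-pooled channel maps feeding a small convolution) as an illustrative assumption of how such a block can re-weight generator features toward atypical or post-surgical regions; it is not the authors' architecture.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights generator feature maps with a learned 2-D attention mask."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # 2-channel input: per-pixel mean and max across feature channels
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        avg_map = feats.mean(dim=1, keepdim=True)                  # (N, 1, H, W)
        max_map = feats.max(dim=1, keepdim=True).values            # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return feats * attn                                        # emphasize salient regions

# usage: insert after a generator block so spatially focused areas get higher weight
x = torch.randn(1, 64, 128, 128)   # hypothetical feature map
print(SpatialAttention()(x).shape)  # torch.Size([1, 64, 128, 128])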

3.
Med Phys; 2018 Jun 14.
Article in English | MEDLINE | ID: mdl-29901223

ABSTRACT

PURPOSE: While MR-only treatment planning using synthetic CTs (synCTs) offers potential for streamlining clinical workflow, a need exists for efficient and automated synCT generation in the brain to facilitate near real-time MR-only planning. This work describes a novel method for generating brain synCTs based on generative adversarial networks (GANs), a deep learning model that trains two competing networks simultaneously, and compares it to a deep convolutional neural network (CNN).

METHODS: Post-gadolinium T1-weighted and CT-SIM images from fifteen brain cancer patients were retrospectively analyzed. The GAN model was developed to generate synCTs from T1-weighted MRI input images using a residual network (ResNet) as the generator. The discriminator was a CNN with five convolutional layers that classified the input image as real or synthetic. Fivefold cross-validation was performed to validate our model. GAN performance was compared to the CNN based on mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics between the synCT and CT images.

RESULTS: GAN training took ~11 h, with a new-case testing time of 5.7 ± 0.6 s. For the GAN, MAEs between synCT and CT-SIM were 89.3 ± 10.3 Hounsfield units (HU) and 41.9 ± 8.6 HU across the entire field of view (FOV) and tissues, respectively. However, MAE in bone and air was, on average, ~240-255 HU. By comparison, the CNN model had an average full-FOV MAE of 102.4 ± 11.1 HU. For the GAN, the mean PSNR was 26.6 ± 1.2 and the SSIM was 0.83 ± 0.03. GAN synCTs preserved details better than CNN synCTs, and regions of abnormal anatomy were well represented on GAN synCTs.

CONCLUSIONS: We developed and validated a GAN model that uses a single T1-weighted MR image as the input to generate robust, high-quality synCTs in seconds. Our method offers strong potential for supporting near real-time MR-only treatment planning in the brain.
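The abstract states only that the discriminator is a CNN with five convolutional layers that labels an input slice as real or synthetic. The PyTorch sketch below shows one plausible layout of such a discriminator; the channel widths, kernel sizes, strides, and patch-wise output are assumptions, since the abstract does not give them.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Five-convolution CNN that scores a CT/synCT slice as real or synthetic."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        chans = [in_channels, 64, 128, 256, 512]   # assumed widths
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        # fifth convolution maps features to a single real/synthetic score map
        layers += [nn.Conv2d(chans[-1], 1, 4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, ct_slice: torch.Tensor) -> torch.Tensor:
        return self.net(ct_slice)   # higher values -> "real", scored per patch

# usage with a hypothetical 256 x 256 single-channel slice
score = Discriminator()(torch.randn(1, 1, 256, 256))
print(score.shape)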
