Results 1 - 20 of 111
1.
ArXiv ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38745706

ABSTRACT

Background: Stereotactic body radiotherapy (SBRT) is a well-established treatment modality for liver metastases in patients unsuitable for surgery. Both CT and MRI are useful during treatment planning for accurate target delineation and to reduce potential organs-at-risk (OAR) toxicity from radiation. MRI-CT deformable image registration (DIR) is required to propagate the contours defined on high-contrast MRI to CT images. An accurate DIR method could lead to more precisely defined treatment volumes and superior OAR sparing on the treatment plan. Therefore, it is beneficial to develop an accurate MRI-CT DIR for liver SBRT. Purpose: To create a new deep learning model that can estimate the deformation vector field (DVF) for directly registering abdominal MRI-CT images. Methods: The proposed method assumed a diffeomorphic deformation. By using topology-preserved deformation features extracted from the probabilistic diffeomorphic registration model, abdominal motion can be accurately obtained and utilized for DVF estimation. The model integrated Swin transformers, which have demonstrated superior performance in motion tracking, into the convolutional neural network (CNN) for deformation feature extraction. The model was optimized using a cross-modality image similarity loss and a surface matching loss. To compute the image loss, a modality-independent neighborhood descriptor (MIND) was used between the deformed MRI and CT images. The surface matching loss was determined by measuring the distance between the warped coordinates of the surfaces of contoured structures on the MRI and CT images. To evaluate the performance of the model, a retrospective study was carried out on a group of 50 liver cases that underwent rigid registration of MRI and CT scans. 
The deformed MRI image was assessed against the CT image using the target registration error (TRE), Dice similarity coefficient (DSC), and mean surface distance (MSD) between the deformed contours of the MRI image and manual contours of the CT image. Results: When compared to only rigid registration, DIR with the proposed method resulted in an increase of the mean DSC values of the liver and portal vein from 0.850±0.102 and 0.628±0.129 to 0.903±0.044 and 0.763±0.073, a decrease of the mean MSD of the liver from 7.216±4.513 mm to 3.232±1.483 mm, and a decrease of the TRE from 26.238±2.769 mm to 8.492±1.058 mm. Conclusion: The proposed DIR method based on a diffeomorphic transformer provides an effective and efficient way to generate an accurate DVF from an MRI-CT image pair of the abdomen. It could be utilized in the current treatment planning workflow for liver SBRT.
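The Dice similarity coefficient reported above is simple to compute from a pair of binary masks; a minimal numpy sketch (the masks and function name are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 4x4 squares overlapping in a 2x2 region.
m1 = np.zeros((10, 10)); m1[2:6, 2:6] = 1
m2 = np.zeros((10, 10)); m2[4:8, 4:8] = 1
print(dice_coefficient(m1, m2))  # 2*4 / (16+16) = 0.25
```

The same mask pair also feeds mean-surface-distance computation once surface voxels are extracted.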

2.
Med Phys ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630982

ABSTRACT

BACKGROUND: 7 Tesla (7T) apparent diffusion coefficient (ADC) maps derived from diffusion-weighted imaging (DWI) demonstrate improved image quality and spatial resolution over 3 Tesla (3T) ADC maps. However, 7T magnetic resonance imaging (MRI) currently suffers from limited clinical availability, higher cost, and increased susceptibility to artifacts. PURPOSE: To address these issues, we propose a hybrid CNN-transformer model to synthesize high-resolution 7T ADC maps from multimodal 3T MRI. METHODS: The Vision CNN-Transformer (VCT), composed of both Vision Transformer (ViT) blocks and convolutional layers, is proposed to produce high-resolution synthetic 7T ADC maps from 3T ADC maps and 3T T1-weighted (T1w) MRI. ViT blocks provide global image context while convolutional layers efficiently capture fine detail. The VCT model was validated on the publicly available Human Connectome Project Young Adult dataset, comprising 3T T1w, 3T DWI, and 7T DWI brain scans. The Diffusion Imaging in Python library was used to compute ADC maps from the DWI scans. A total of 171 patient cases were randomly divided into 130 training cases, 20 validation cases, and 21 test cases. The synthetic ADC maps were evaluated by comparing their similarity to the ground truth volumes with the following metrics: peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and mean squared error (MSE). RESULTS: The results are as follows: PSNR: 27.0 ± 0.9 dB, SSIM: 0.945 ± 0.010, and MSE: 2.0E-3 ± 0.4E-3. Both qualitative and quantitative results demonstrate that VCT performs favorably against other state-of-the-art methods. We have introduced various efficiency improvements, including the implementation of flash attention and training on 176×208 resolution images. These enhancements reduced the parameter count and training time per epoch by 50% in comparison to ResViT. 
Specifically, the training time per epoch has been shortened from 7.67 min to 3.86 min. CONCLUSION: We propose a novel method to predict high-resolution 7T ADC maps from low-resolution 3T ADC maps and T1w MRI. Our predicted images demonstrate better spatial resolution and contrast compared to 3T MRI and prediction results made by ResViT and pix2pix. These high-quality synthetic 7T MR images could be beneficial for disease diagnosis and intervention, producing higher resolution and conformal contours, and as an intermediate step in generating synthetic CT for radiation therapy, especially when 7T MRI scanners are unavailable.
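The abstract notes that ADC maps were computed from the DWI scans with the Diffusion Imaging in Python library. Under the standard mono-exponential model, an ADC map can be estimated per voxel from a b=0 image and one diffusion-weighted image; the two-point formula below is the textbook relation, not the paper's exact pipeline:

```python
import numpy as np

def adc_two_point(s0: np.ndarray, sb: np.ndarray, b: float = 1000.0,
                  eps: float = 1e-6) -> np.ndarray:
    """Per-voxel ADC (mm^2/s) from the mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    # Clamp intensities away from zero to keep the logarithm finite.
    return np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b

s0 = np.array([1000.0, 800.0])   # b = 0 signal
sb = s0 * np.exp(-1.0)           # attenuation corresponding to ADC = 1e-3 mm^2/s at b = 1000
print(adc_two_point(s0, sb))     # ~[0.001, 0.001]
```

In practice multiple b-values are fit jointly, but the single-pair case conveys the idea.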

3.
Cell Rep Med ; 5(4): 101486, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38631288

ABSTRACT

PET scans provide additional clinical value but are costly and not universally accessible. Salehjahromi et al.1 developed an AI-based pipeline to synthesize PET images from diagnostic CT scans, demonstrating its potential clinical utility across various clinical tasks for lung cancer.


Subject(s)
Lung Neoplasms , Humans , Fluorodeoxyglucose F18 , Tomography, X-Ray Computed/methods , Prognosis , Artificial Intelligence
4.
Med Phys ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38588512

ABSTRACT

PURPOSE: Positron Emission Tomography (PET) has been a commonly used imaging modality in broad clinical applications. One of the most important tradeoffs in PET imaging is between image quality and radiation dose: high image quality comes with high radiation exposure. Improving image quality is desirable for all clinical applications while minimizing radiation exposure is needed to reduce risk to patients. METHODS: We introduce PET Consistency Model (PET-CM), an efficient diffusion-based method for generating high-quality full-dose PET images from low-dose PET images. It employs a two-step process, adding Gaussian noise to full-dose PET images in the forward diffusion, and then denoising them using a PET Shifted-window Vision Transformer (PET-VIT) network in the reverse diffusion. The PET-VIT network learns a consistency function that enables direct denoising of Gaussian noise into clean full-dose PET images. PET-CM achieves state-of-the-art image quality while requiring significantly less computation time than other methods. Evaluation with normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), multi-scale structure similarity index (SSIM), normalized cross-correlation (NCC), and clinical evaluation including Human Ranking Score (HRS) and Standardized Uptake Value (SUV) Error analysis shows its superiority in synthesizing full-dose PET images from low-dose inputs. RESULTS: In experiments comparing eighth-dose to full-dose images, PET-CM demonstrated impressive performance with NMAE of 1.278 ± 0.122%, PSNR of 33.783 ± 0.824 dB, SSIM of 0.964 ± 0.009, NCC of 0.968 ± 0.011, HRS of 4.543, and SUV Error of 0.255 ± 0.318%, with an average generation time of 62 s per patient. This is a significant improvement compared to the state-of-the-art diffusion-based model with PET-CM reaching this result 12× faster. 
Similarly, in the quarter-dose to full-dose image experiments, PET-CM delivered competitive outcomes, achieving an NMAE of 0.973 ± 0.066%, PSNR of 36.172 ± 0.801 dB, SSIM of 0.984 ± 0.004, NCC of 0.990 ± 0.005, HRS of 4.428, and SUV Error of 0.151 ± 0.192% using the same generation process, underlining its high quantitative and clinical precision in both denoising scenarios. CONCLUSIONS: We propose PET-CM, the first efficient diffusion-model-based method for estimating full-dose PET images from low-dose images. PET-CM provides quality comparable to the state-of-the-art diffusion model with higher efficiency. By utilizing this approach, it becomes possible to maintain high-quality PET images suitable for clinical use while mitigating the risks associated with radiation. The code is available at https://github.com/shaoyanpan/Full-dose-Whole-body-PET-Synthesis-from-Low-dose-PET-Using-Consistency-Model.
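The NMAE and PSNR figures quoted above can be reproduced from a reference and a synthetic volume in a few lines of numpy; normalizing by the reference dynamic range, as below, is a common convention and an assumption on our part, since the abstract does not spell it out:

```python
import numpy as np

def nmae_percent(pred: np.ndarray, ref: np.ndarray) -> float:
    """Normalized MAE as a percentage of the reference dynamic range (assumed convention)."""
    return 100.0 * float(np.abs(pred - ref).mean()) / float(ref.max() - ref.min())

def psnr_db(pred: np.ndarray, ref: np.ndarray) -> float:
    """PSNR in dB, taking the reference dynamic range as the peak value."""
    mse = float(((pred - ref) ** 2).mean())
    peak = float(ref.max() - ref.min())
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.linspace(0.0, 100.0, 101)
pred = ref + 1.0                  # uniform 1-unit error over a 100-unit range
print(nmae_percent(pred, ref))    # 1.0
print(psnr_db(pred, ref))         # 10*log10(100^2 / 1) = 40.0
```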

5.
J Appl Clin Med Phys ; 25(2): e14155, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37712893

ABSTRACT

Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies written on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight the recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.


Subject(s)
Deep Learning , Neoplasms , Humans , Magnetic Resonance Imaging/methods , Neoplasms/diagnostic imaging , Neoplasms/radiotherapy
6.
Med Phys ; 51(3): 1974-1984, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37708440

ABSTRACT

BACKGROUND: An automated, accurate, and efficient lung four-dimensional computed tomography (4DCT) image registration method is clinically important to quantify respiratory motion for optimal motion management. PURPOSE: The purpose of this work is to develop a weakly supervised deep learning method for 4DCT lung deformable image registration (DIR). METHODS: The landmark-driven cycle network is proposed as a deep learning platform that performs DIR of individual phase datasets in a simulation 4DCT. This proposed network comprises a generator and a discriminator. The generator accepts moving and target CTs as input and outputs the deformation vector fields (DVFs) to match the two CTs. It is optimized during both forward and backward paths to enhance the bi-directionality of DVF generation. Further, the landmarks are used to weakly supervise the generator network: a landmark-driven loss guides the generator's training. The discriminator then judges the realism of the deformed CT to provide extra DVF regularization. RESULTS: We performed four-fold cross-validation on 10 4DCT datasets from the public DIR-Lab dataset and a hold-out test on our clinic dataset, which included 50 4DCT datasets. The DIR-Lab dataset was used to evaluate the performance of the proposed method against other methods in the literature by calculating the DIR-Lab Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE. Bi-directional and landmark-driven losses were shown to be effective for obtaining high registration accuracy. The mean and standard deviation of TRE for the DIR-Lab datasets were 1.20 ± 0.72 mm, and the mean absolute error (MAE) and structural similarity index (SSIM) for our datasets were 32.1 ± 11.6 HU and 0.979 ± 0.011, respectively. 
CONCLUSION: The landmark-driven cycle network has been validated and tested for automatic deformable image registration of patients' lung 4DCTs with results comparable to or better than competing methods.
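The DIR-Lab TRE used above is, at its core, the mean Euclidean distance between fixed landmarks and the corresponding moving landmarks after the DVF has been applied; a minimal sketch (landmark coordinates and spacing are illustrative):

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean Euclidean distance (mm) between fixed landmarks and warped moving
    landmarks, with voxel indices scaled to millimeters by `spacing`."""
    diff = (np.asarray(fixed_pts, float) - np.asarray(warped_pts, float)) \
           * np.asarray(spacing, float)
    return float(np.linalg.norm(diff, axis=1).mean())

fixed  = [[10, 10, 10], [20, 20, 20]]
warped = [[13, 14, 10], [20, 20, 20]]   # one landmark off by a 3-4-0 right triangle
print(target_registration_error(fixed, warped))  # (5.0 + 0.0) / 2 = 2.5
```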


Subject(s)
Four-Dimensional Computed Tomography , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Computer Simulation , Motion , Algorithms
7.
Med Phys ; 51(3): 1847-1859, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37646491

ABSTRACT

BACKGROUND: Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making it an ideal option for adaptive radiotherapy (ART) replanning. However, the presence of severe artifacts and inaccurate Hounsfield unit (HU) values prevents its use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE: This work aims to develop a conditional diffusion model to perform image translation from the CBCT to the CT distribution for the image quality improvement of CBCT. METHODS: The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that utilizes a time-embedded U-net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample to the target CT distribution conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) metrics on generated synthetic CT (sCT) samples. The proposed method was also compared to four other diffusion model-based sCT generation methods. RESULTS: In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the metrics were 32.56 HU, 27.65 dB, 0.98 and 38.99 HU, 27.00 dB, 0.98 for sCT and CBCT, respectively. 
Compared to the other four diffusion models and one Cycle generative adversarial network (Cycle GAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS: The proposed conditional DDPM method can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
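The forward process described above has a well-known closed form: x_t is a weighted mix of the clean image and unit Gaussian noise. A minimal numpy sketch with a linear beta schedule (the schedule, shapes, and seed are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def forward_diffuse(x0: np.ndarray, t: int, betas: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar_t = np.cumprod(1.0 - betas)[t]           # cumulative product of alphas
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear schedule
x0 = np.zeros((64, 64))                 # stand-in for a normalized CT slice
x_T = forward_diffuse(x0, 999, betas, rng)
# At the final step, x_T is essentially pure unit-variance Gaussian noise;
# the reverse (learned) process walks back from x_T toward the CT distribution.
```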


Subject(s)
Bisacodyl/analogs & derivatives , Image Processing, Computer-Assisted , Spiral Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted/methods , Cone-Beam Computed Tomography , Tomography, X-Ray Computed , Models, Statistical , Radiotherapy Planning, Computer-Assisted/methods
8.
Med Phys ; 51(4): 2538-2548, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38011588

ABSTRACT

BACKGROUND AND PURPOSE: Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error-prone image registration, ultimately reducing patient radiation dose and setup uncertainty. In this work, we propose an MRI-to-CT transformer-based improved denoising diffusion probabilistic model (MC-IDDPM) to translate MRI into high-quality sCT to facilitate radiation treatment planning. METHODS: MC-IDDPM implements diffusion processes with a shifted-window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which involves adding Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted-window transformer V-net (Swin-Vnet) denoises the noisy CT scans conditioned on the MRI from the same patient to produce noise-free CT scans. With an optimally trained Swin-Vnet, the reverse diffusion process was used to generate noise-free sCT scans matching MRI anatomy. We evaluated the proposed method by generating sCT from MRI on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations were conducted using several metrics, including Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Multi-scale Structure Similarity Index (SSIM), and Normalized Cross Correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and target dose coverage at the 95% and 99% levels. RESULTS: MC-IDDPM generated brain sCTs with state-of-the-art quantitative results with MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039. 
MC-IDDPM demonstrates a statistically significant improvement (with p < 0.05) in most metrics when compared to competing networks, for both brain and prostate synthetic CT. Dosimetry analyses indicated that the target dose coverage differences by using CT and sCT were within ± 0.34%. CONCLUSIONS: We have developed and validated a novel approach for generating CT images from routine MRIs using a transformer-based improved DDPM. This model effectively captures the complex relationship between CT and MRI images, allowing for robust and high-quality synthetic CT images to be generated in a matter of minutes. This approach has the potential to greatly simplify the treatment planning process for radiation therapy by eliminating the need for additional CT scans, reducing the amount of time patients spend in treatment planning, and enhancing the accuracy of treatment delivery.
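The 95%/99% target coverage comparison amounts to reading D-values off a dose-volume histogram; a hypothetical sketch (the helper name and convention are ours, not the paper's):

```python
import numpy as np

def dose_at_volume(dose: np.ndarray, target_mask: np.ndarray, level: float) -> float:
    """D_level: minimum dose received by the hottest `level` fraction of the
    target volume (level=0.95 gives D95, level=0.99 gives D99)."""
    voxels = np.sort(dose[target_mask.astype(bool)])[::-1]      # hottest voxels first
    return float(voxels[int(np.ceil(level * voxels.size)) - 1])

dose = np.arange(1.0, 101.0)             # 100 target voxels with doses 1..100 Gy
mask = np.ones(100, dtype=bool)
print(dose_at_volume(dose, mask, 0.95))  # 95% of the target receives at least 6 Gy
```

Comparing such D95/D99 values between CT-based and sCT-based dose grids yields the sub-percent coverage differences the abstract reports.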


Subject(s)
Head , Tomography, X-Ray Computed , Male , Humans , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiometry , Image Processing, Computer-Assisted/methods
9.
Front Oncol ; 13: 1274803, 2023.
Article in English | MEDLINE | ID: mdl-38156106

ABSTRACT

Background and purpose: A novel radiotracer, 18F-fluciclovine (anti-3-18F-FACBC), has been demonstrated to be associated with significantly improved survival when used in PET/CT imaging to guide postprostatectomy salvage radiotherapy for prostate cancer. We aimed to investigate the feasibility of using a deep learning method to automatically detect and segment lesions on 18F-fluciclovine PET/CT images. Materials and methods: We retrospectively identified 84 patients enrolled in Arm B of the Emory Molecular Prostate Imaging for Radiotherapy Enhancement (EMPIRE-1) trial. All 84 patients had prostate adenocarcinoma and underwent prostatectomy and 18F-fluciclovine PET/CT imaging with lesions identified and delineated by physicians. Three different neural networks with increasing levels of complexity (U-net, Cascaded U-net, and a cascaded detection segmentation network) were trained and tested on the 84 patients with a fivefold cross-validation strategy and a hold-out test, using manual contours as the ground truth. We also investigated using both PET and CT, or PET only, as input to the neural network. Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference (VD) were used to quantify the quality of segmentation results against ground truth contours provided by physicians. Results: The three deep learning methods successfully detected 144/155 lesions with PET+CT as input and 153/155 lesions with PET only. Quantitative results demonstrated that the best-performing network segmented lesions with an average DSC of 0.68 ± 0.15 and HD95 of 4 ± 2 mm. The center of mass of the segmented contours deviated from physician contours by approximately 2 mm on average, and the volume difference was less than 1 cc. Our proposed network achieved the best performance among the networks compared. 
Adding CT as input contributed more failure cases (DSC = 0); among cases with DSC > 0, PET+CT input showed no statistically significant difference from PET-only input for our proposed method. Conclusion: Quantitative results demonstrated the feasibility of the deep learning methods in automatically segmenting lesions on 18F-fluciclovine PET/CT images. This indicates the great potential of 18F-fluciclovine PET/CT combined with deep learning for providing a second check in identifying lesions as well as saving time and effort for physicians in contouring.
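HD95, used above alongside DSC, replaces the maximum in the Hausdorff distance with the 95th percentile to damp the influence of stray surface voxels; a brute-force sketch over small point sets (a KD-tree would be used for real surfaces):

```python
import numpy as np

def hd95(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two surface point sets."""
    a = np.asarray(surface_a, float)
    b = np.asarray(surface_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairwise distances
    a_to_b = np.percentile(d.min(axis=1), 95)  # each point of A to its nearest in B
    b_to_a = np.percentile(d.min(axis=0), 95)  # and vice versa
    return float(max(a_to_b, b_to_a))

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + [0.0, 1.0]      # the same contour shifted by 1 mm
print(hd95(a, b))       # 1.0
```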

10.
Phys Med Biol ; 68(23)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37972414

ABSTRACT

The hippocampus plays a crucial role in memory and cognition. Because of the associated toxicity from whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on an accurate segmentation of the small and complexly shaped hippocampus. To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1 weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a cascaded model strategy. The proposed model consists of two major parts: (1) a localization model is used to detect the volume-of-interest (VOI) of the hippocampus. (2) An end-to-end morphological vision transformer network (Franchi et al 2020 Pattern Recognit. 102 107246; Ranem et al 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW) pp 3710-3719) is used to perform substructure segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators. The integration of these morphological operators into the vision transformer increases the accuracy and ability to separate the hippocampus structure into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. In five-fold cross-validation, the Dice similarity coefficients were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively. 
The mean surface distances (MSDs) were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm for the hippocampus proper and parts of the subiculum, respectively. The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MR images. It may facilitate the current clinical workflow and reduce the physicians' effort.


Subject(s)
Hippocampus , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Hippocampus/diagnostic imaging , Artificial Intelligence , Image Processing, Computer-Assisted/methods
11.
ArXiv ; 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-38013889

ABSTRACT

BACKGROUND: Dual-energy CT (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic image priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, these methods are in the supervised-learning framework requiring paired data for training, which is not readily available in clinical settings. PURPOSE: This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT.

12.
ArXiv ; 2023 Aug 02.
Article in English | MEDLINE | ID: mdl-37576122

ABSTRACT

Dual-energy computed tomography (DECT) is a promising technology that has shown a number of clinical advantages over conventional X-ray CT, such as improved material identification, artifact suppression, etc. For proton therapy treatment planning, besides material-selective images, maps of effective atomic number (Z) and relative electron density to that of water ($\rho_e$) can also be achieved and further employed to improve stopping power ratio accuracy and reduce range uncertainty. In this work, we propose a one-step iterative estimation method, which employs multi-domain gradient $L_0$-norm minimization, for Z and $\rho_e$ maps reconstruction. The algorithm was implemented on GPU to accelerate the predictive procedure and to support potential real-time adaptive treatment planning. The performance of the proposed method is demonstrated via both phantom and patient studies.

13.
Res Sq ; 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37546731

ABSTRACT

Objective: FLASH radiotherapy leverages ultra-high dose-rate radiation to enhance the sparing of organs at risk without compromising tumor control probability. This may allow dose escalation, toxicity mitigation, or both. To prepare for the ultra-high dose-rate delivery, we aim to develop a deep learning (DL)-based image-guided framework to enable fast volumetric image reconstruction for accurate target localization for proton FLASH beam delivery. Approach: The proposed framework comprises four modules, including orthogonal kV x-ray projection acquisition, DL-based volumetric image generation, image quality analyses, and water equivalent thickness (WET) evaluation. We investigated volumetric image reconstruction using kV projection pairs with four different source angles. Thirty patients with lung targets were identified from an institutional database, each patient having a four-dimensional computed tomography (CT) dataset with ten respiratory phases. Leave-phase-out cross-validation was performed to investigate the DL model's robustness for each patient. Main results: The proposed framework reconstructed patients' volumetric anatomy, including tumors and organs at risk, from orthogonal x-ray projections. Considering all evaluation metrics, the kV projections with source angles of 135° and 225° yielded the optimal volumetric images. The patient-averaged mean absolute error, peak signal-to-noise ratio, structural similarity index measure, and WET error were 75±22 HU, 19±3.7 dB, 0.938±0.044, and -1.3%±4.1%. Significance: The proposed framework has been demonstrated to reconstruct volumetric images with a high degree of accuracy using two orthogonal x-ray projections. The embedded WET module can be used to detect potential proton beam-specific patient anatomy variations. This framework can rapidly deliver volumetric images to potentially guide proton FLASH therapy treatment delivery systems.
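The WET evaluation above reduces, per ray, to integrating relative stopping power (RSP) over geometric path length; a one-line sketch under a uniform step size (the tissue RSP values are illustrative, not from the paper):

```python
import numpy as np

def water_equivalent_thickness(rsp_profile: np.ndarray, step_mm: float) -> float:
    """WET (mm) along a ray: sum of relative-stopping-power samples times the step length."""
    return float(np.sum(rsp_profile) * step_mm)

# 100 mm of soft tissue (RSP ~1.0) followed by 20 mm of lung (RSP ~0.3), 1 mm steps.
profile = np.concatenate([np.full(100, 1.0), np.full(20, 0.3)])
print(water_equivalent_thickness(profile, step_mm=1.0))  # ~106.0 (100*1.0 + 20*0.3)
```

Comparing per-ray WET between the reconstructed volume and the planning CT flags anatomy changes that matter specifically for proton range.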

14.
ArXiv ; 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37396614

ABSTRACT

Background: The hippocampus plays a crucial role in memory and cognition. Because of the associated toxicity from whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on an accurate segmentation of the small and complexly shaped hippocampus. Purpose: To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1 weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a mutually enhanced strategy. Methods: The proposed model consists of two major parts: 1) a localization model is used to detect the volume-of-interest (VOI) of the hippocampus. 2) An end-to-end morphological vision transformer network is used to perform substructure segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators. The integration of these morphological operators into the vision transformer increases the accuracy and ability to separate the hippocampus structure into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. The segmentations were evaluated with two indicators: 1) multiple metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), volume difference (VD), and center-of-mass distance (COMD); 2) volumetric Pearson correlation analysis. 
Results: In five-fold cross-validation, the DSCs were 0.900±0.029 and 0.886±0.031 for the hippocampus proper and parts of the subiculum, respectively. The MSDs were 0.426±0.115 mm and 0.401±0.100 mm for the hippocampus proper and parts of the subiculum, respectively. Conclusions: The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MR images. It may facilitate the current clinical workflow and reduce the physicians' effort.

15.
Adv Radiat Oncol ; 8(5): 101267, 2023.
Article in English | MEDLINE | ID: mdl-37408668

ABSTRACT

Purpose: Proton vertebral body sparing craniospinal irradiation (CSI) treats the thecal sac while avoiding the anterior vertebral bodies in an effort to reduce myelosuppression and growth inhibition. However, robust treatment planning needs to compensate for proton range uncertainty, which contributes unwanted doses within the vertebral bodies. This work aimed to develop an early in vivo radiation damage quantification method using longitudinal magnetic resonance (MR) scans to quantify the dose effect during fractionated CSI. Methods and Materials: Ten pediatric patients were enrolled in a prospective clinical trial of proton vertebral body sparing CSI, in which they received 23.4 to 36 Gy. Monte Carlo robust planning was used, with spinal clinical target volumes defined as the thecal sac and neural foramina. T1/T2-weighted MR scans were acquired before, during, and after treatments to detect a transition from hematopoietic to less metabolically active fatty marrow. MR signal intensity histograms at each time point were analyzed and fitted by multi-Gaussian models to quantify radiation damage. Results: Fatty marrow infiltration was observed in MR images as early as the fifth fraction of treatment. Maximum radiation-induced marrow damage occurred 40 to 50 days from the treatment start, followed by marrow regeneration. The mean damage ratios were 0.23, 0.41, 0.59, and 0.54, corresponding to 10, 20, 40, and 60 days from the treatment start. Conclusions: We demonstrated a noninvasive method for identifying early vertebral marrow damage based on radiation-induced fatty marrow replacement. The proposed method can potentially be used to quantify the quality of CSI vertebral sparing and preserve metabolically active hematopoietic bone marrow.

16.
J Appl Clin Med Phys ; 24(10): e14064, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37345557

ABSTRACT

In this work, we demonstrate a method for rapid synthesis of high-quality CT images from unpaired, low-quality CBCT images, permitting CBCT-based adaptive radiotherapy. We adapt contrastive unpaired translation (CUT) to be used with medical images and evaluate the results on an institutional pelvic CT dataset. We compare the method against cycleGAN using mean absolute error, structural similarity index, root mean squared error, and Fréchet Inception Distance and show that CUT significantly outperforms cycleGAN while requiring less time and fewer resources. The investigated method improves the feasibility of online adaptive radiotherapy over the present state-of-the-art.


Subject(s)
Spiral Cone-Beam Computed Tomography, Humans, Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing/methods, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning/methods
17.
ArXiv ; 2023 May 02.
Article in English | MEDLINE | ID: mdl-37163137

ABSTRACT

The advent of computed tomography has significantly improved patient care with respect to diagnosis, prognosis, and treatment planning and verification. However, tomographic imaging adds concomitant radiation dose to patients, which has been estimated to increase the risk of secondary cancer by about 4%. We demonstrate the feasibility of a data-driven approach to synthesize volumetric images from patients' surface images, which can be obtained from a zero-dose surface imaging system. This study includes 500 computed tomography (CT) image sets from 50 patients. Compared with the ground truth CT, the synthetic images achieve a mean absolute error of 26.9 ± 4.1 Hounsfield units, a peak signal-to-noise ratio of 39.1 ± 1.0 dB, and a structural similarity index measure of 0.965 ± 0.011. This approach provides a data integration solution that could enable real-time imaging free of radiation-induced risk and could be applied to image-guided medical procedures.
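The peak signal-to-noise ratio reported above is derived from the mean squared error over the image's dynamic range; a small sketch (the choice of data range, e.g. the HU window used, is an assumption that directly shifts the reported dB value):

```python
import numpy as np

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB over a given dynamic range."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```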

19.
Phys Med Biol ; 68(10)2023 05 05.
Article in English | MEDLINE | ID: mdl-37015231

ABSTRACT

Objective. Artificial intelligence (AI) methods have gained popularity in medical imaging research, but the training image datasets needed for successful AI model deployment are not always available at the desired size and scope. In this paper, we introduce a medical image synthesis framework aimed at addressing the challenge of limited training datasets for AI models. Approach. The proposed 2D image synthesis framework is based on a diffusion model using a Swin-transformer-based network. The model consists of a forward Gaussian noising process and a reverse denoising process using the transformer-based diffusion model. Training data include four image datasets: chest x-rays, heart MRI, pelvic CT, and abdominal CT. We evaluated the authenticity, quality, and diversity of the synthetic images using visual Turing assessments conducted by three medical physicists and four quantitative metrics: the Inception score (IS), the Fréchet Inception Distance (FID), the feature similarity score (FDS), and the diversity score (DS) between the synthetic and true images. To demonstrate the framework's value for training AI models, we conducted COVID-19 classification tasks using real images, synthetic images, and mixtures of both. Main results. Visual Turing assessments showed an average accuracy of 0.64 (accuracy converging to 50% indicates a more realistic visual appearance of the synthetic images), a sensitivity of 0.79, and a specificity of 0.50. The average quantitative scores across all datasets were IS = 2.28, FID = 37.27, FDS = 0.20, and DS = 0.86. For the COVID-19 classification task, the baseline network obtained an accuracy of 0.88 using purely real data, 0.89 using purely synthetic data, and 0.93 using a mixture of real and synthetic data. Significance. An image synthesis framework was demonstrated that can generate high-quality medical images of different imaging modalities to supplement existing training sets for AI model deployment. This method has potential applications in data-driven medical imaging research.
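The forward Gaussian noising process mentioned in the abstract has a standard closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with alpha_bar_t the cumulative product of (1 - beta_s). A minimal sketch under an assumed linear beta schedule (the paper's actual schedule and timestep count are not given here):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# linear beta schedule, a common default (hyperparameters are illustrative)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
```

Because the schedule is variance-preserving, a unit-variance input stays near unit variance at every timestep; the reverse process trains the Swin-transformer network to undo this noising step by step.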


Subject(s)
Artificial Intelligence, COVID-19, Humans, COVID-19/diagnostic imaging, X-Ray Computed Tomography, Diffusion, Statistical Models, Computer-Assisted Image Processing
20.
Phys Med Biol ; 68(9)2023 04 13.
Article in English | MEDLINE | ID: mdl-36958049

ABSTRACT

Objective. CBCTs in image-guided radiotherapy provide crucial anatomic information for patient setup and plan evaluation. Longitudinal CBCT image registration can quantify inter-fractional anatomic changes, e.g. tumor shrinkage and daily OAR variation, throughout the course of treatment. The purpose of this study is to propose an unsupervised deep learning-based CBCT-CBCT deformable image registration method that enables quantitative analysis of anatomic variation. Approach. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) to predict the coarse- and fine-scale motions, respectively. The network was trained by minimizing an image similarity loss and a deformation vector field (DVF) regularization loss, without supervision from ground truth DVFs. During the inference stage, patches of local DVF were predicted by the trained LocalGAN and fused to form a whole-image DVF, which was then combined with the GlobalGAN-generated DVF to obtain the final DVF. The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a holdout test. Main results. Qualitatively, the registration results show good alignment between the deformed CBCT images and the target CBCT image. Quantitatively, the average target registration error calculated on fiducial markers and manually identified landmarks was 1.91 ± 1.18 mm. The average mean absolute error and normalized cross-correlation between the deformed CBCT and the target CBCT were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively. Significance. An unsupervised deep learning-based CBCT-CBCT registration method is proposed, and its feasibility and performance in fractionated image-guided radiotherapy are investigated. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate the analysis and prediction of inter-fractional anatomic changes.
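The target registration error reported above is, in its simplest form, the mean Euclidean distance between corresponding landmarks after applying the predicted DVF. A small numpy sketch (the nearest-neighbor DVF sampling is a simplification; trilinear interpolation would normally be used, and the voxel-indexed DVF layout is an assumption):

```python
import numpy as np

def target_registration_error(moving_pts, target_pts, dvf=None):
    """Mean Euclidean distance between corresponding landmark pairs,
    optionally after displacing the moving points by a DVF of shape
    (Z, Y, X, 3) sampled at the nearest voxel."""
    pts = np.asarray(moving_pts, dtype=float)
    if dvf is not None:
        idx = np.round(pts).astype(int)
        pts = pts + dvf[idx[:, 0], idx[:, 1], idx[:, 2]]
    diffs = pts - np.asarray(target_pts, dtype=float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```

A perfect registration drives the TRE to zero; the abstract's 1.91 mm average is this quantity evaluated on fiducial markers and manually identified landmarks.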


Subject(s)
Deep Learning, Neoplasms, Image-Guided Radiotherapy, Spiral Cone-Beam Computed Tomography, Humans, Computer-Assisted Image Processing/methods, Cone-Beam Computed Tomography/methods, Computer-Assisted Radiotherapy Planning