Results 1 - 17 of 17
1.
J Xray Sci Technol ; 25(6): 907-926, 2017.
Article in English | MEDLINE | ID: mdl-28697578

ABSTRACT

BACKGROUND: In regularized iterative reconstruction algorithms, the selection of the regularization parameter depends on the noise level of the cone beam projection data. OBJECTIVE: Our aim is to propose an algorithm to estimate the noise level of cone beam projection data. METHODS: We first derived the data correlation of cone beam projection data in the Fourier domain, based on which the signal and the noise were decoupled. The noise was then extracted and averaged for estimation. An adaptive regularization parameter selection strategy was introduced based on the estimated noise level. Simulation and real data studies were conducted for performance validation. RESULTS: There exists an approximately zero-energy double-wedge area in the 3D Fourier domain of cone beam projection data. For noise level estimation, the averaged relative errors of the proposed algorithm in the analytical/MC/spotlight-mode simulation experiments were 0.8%, 0.14% and 0.24%, respectively, outperforming both the homogeneous-area-based and the transformation-based algorithms. Real data studies indicated that the estimated noise levels were inversely proportional to the exposure levels, i.e., the slopes in the log-log plot were -1.0197 and -1.049 for the short-scan and half-fan modes, respectively. The introduced regularization parameter selection strategy delivered promising reconstructed image quality. CONCLUSIONS: Based on the data correlation of cone beam projection data in the Fourier domain, the proposed algorithm can estimate the noise level of cone beam projection data accurately and robustly. The estimated noise level can be used to adaptively select the regularization parameter.
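The wedge-based decoupling rests on a simple principle: where the signal's Fourier energy is approximately zero, what remains is noise, and averaging its power yields a variance estimate. A minimal sketch of that principle (not the authors' algorithm; the smooth phantom and the high-frequency tail fraction stand in for the zero-energy double-wedge region):

```python
import numpy as np

def estimate_noise_sigma(projection, tail_frac=0.25):
    """Estimate additive white-noise sigma from the highest-frequency
    Fourier coefficients of a 2-D projection, assumed signal-free
    (a toy analogue of the zero-energy wedge in the paper)."""
    F = np.fft.rfft2(projection)
    w = F.shape[1]
    tail = F[:, int(w * (1.0 - tail_frac)):]   # highest-frequency columns
    # For white Gaussian noise each coefficient has E|F|^2 = N * sigma^2.
    return np.sqrt(np.mean(np.abs(tail) ** 2) / projection.size)

# Smooth "signal" plus white Gaussian noise of known sigma = 2.0.
rng = np.random.default_rng(42)
signal = 100.0 * np.outer(np.hanning(256), np.hanning(256))
est = estimate_noise_sigma(signal + rng.normal(0.0, 2.0, signal.shape))
```

The estimate is accurate only where signal leakage into the tail region is negligible, which is exactly the property the paper establishes for the double-wedge area.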


Subject(s)
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Humans; Phantoms, Imaging; Scattering, Radiation
2.
IEEE Trans Med Imaging ; 43(1): 162-174, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37432808

ABSTRACT

Four-dimensional magnetic resonance imaging (4D-MRI) is an emerging technique for tumor motion management in image-guided radiation therapy (IGRT). However, current 4D-MRI suffers from low spatial resolution and strong motion artifacts owing to the long acquisition time and patients' respiratory variations. If not managed properly, these limitations can adversely affect treatment planning and delivery in IGRT. In this study, we developed a novel deep learning framework called the coarse-super-resolution-fine network (CoSF-Net) to achieve simultaneous motion estimation and super-resolution within a unified model. We designed CoSF-Net by fully exploiting the inherent properties of 4D-MRI, with consideration of limited and imperfectly matched training datasets. We conducted extensive experiments on multiple real patient datasets to assess the feasibility and robustness of the developed network. Compared with existing networks and three state-of-the-art conventional algorithms, CoSF-Net not only accurately estimated the deformable vector fields between the respiratory phases of 4D-MRI but also simultaneously improved its spatial resolution, enhancing anatomical features and producing 4D-MR images with high spatiotemporal resolution.


Subject(s)
Radiotherapy, Image-Guided; Humans; Motion (Physics); Radiotherapy, Image-Guided/methods; Neural Networks, Computer; Magnetic Resonance Imaging/methods
3.
Phys Imaging Radiat Oncol ; 30: 100577, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38707629

ABSTRACT

Background and purpose: Radiation-induced erectile dysfunction (RiED) commonly affects prostate cancer patients, prompting clinical trials across institutions to explore dose sparing of the internal pudendal arteries (IPA) to preserve sexual potency. The IPA is challenging to segment and is not conventionally considered an organ-at-risk (OAR). This study proposes a deep learning (DL) auto-segmentation model for the IPA that uses computed tomography (CT) and magnetic resonance imaging (MRI), or CT alone, to accommodate varied clinical practices. Materials and methods: A total of 86 patients with CT and MRI images and noisy IPA labels were recruited in this study. We split the data into 42/14/30 for model training, testing, and a clinical observer study, respectively. The model contained three major innovations: 1) an architecture with squeeze-and-excite blocks and modality attention for effective feature extraction and accurate segmentation, 2) a novel loss function for effectively training the model with noisy labels, and 3) a modality dropout strategy enabling segmentation in the absence of MRI. Results: Test dataset metrics were DSC 61.71 ± 7.7%, ASD 2.5 ± 0.87 mm, and HD95 7.0 ± 2.3 mm. AI-segmented contours showed dosimetric similarity to expert physicians' contours. The observer study indicated higher scores for AI contours (mean = 3.7) than for inexperienced physicians' contours (mean = 3.1). Inexperienced physicians improved their scores to 3.7 when starting from AI contours. Conclusion: The proposed model achieved good-quality IPA contours, improving the uniformity of segmentation and facilitating the introduction of standardized IPA segmentation into clinical trials and practice.
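The DSC reported throughout these studies is the standard overlap metric between two binary masks; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

For example, two 8-pixel strips overlapping in 4 pixels give a DSC of 0.5.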

4.
Phys Med Biol ; 68(4)2023 02 10.
Article in English | MEDLINE | ID: mdl-36657169

ABSTRACT

Cone-beam CT (CBCT)-based online adaptive radiotherapy calls for accurate auto-segmentation to reduce the time cost for physicians. However, deep learning (DL)-based direct segmentation of CBCT images is a challenging task, mainly due to poor image quality and the lack of well-labelled large training datasets. Deformable image registration (DIR) is often used to propagate the manual contours on the planning CT (pCT) of the same patient to the CBCT. In this work, we address these problems with the assistance of DIR. Our method consists of three main components. First, we use deformed pCT contours derived from multiple DIR methods between the pCT and CBCT as pseudo labels for initial training of the DL-based direct segmentation model. Second, we use deformed pCT contours from another DIR algorithm as influencer volumes to define the region of interest for DL-based direct segmentation. Third, the initially trained DL model is further fine-tuned using a smaller set of true labels. Nine patients were used for model evaluation. We found that DL-based direct segmentation of CBCT without influencer volumes performed much worse than DIR-based segmentation. However, adding deformed pCT contours as influencer volumes in the direct segmentation network dramatically improved performance, reaching the accuracy level of DIR-based segmentation. The DL model with influencer volumes was further improved through fine-tuning with a smaller set of true labels, achieving a mean Dice similarity coefficient of 0.86, a 95th-percentile Hausdorff distance of 2.34 mm, and an average surface distance of 0.56 mm. In summary, a DL-based direct CBCT segmentation model can be improved to outperform DIR-based segmentation models by using deformed pCT contours as pseudo labels and influencer volumes for initial training, and by using a smaller set of true labels for model fine-tuning.


Subject(s)
Deep Learning; Radiotherapy Planning, Computer-Assisted; Humans; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods; Algorithms
5.
Med Phys ; 50(4): 1947-1961, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36310403

ABSTRACT

PURPOSE: Online adaptive radiotherapy (ART) requires accurate and efficient auto-segmentation of target volumes and organs-at-risk (OARs), mostly in cone-beam computed tomography (CBCT) images, which often have severe artifacts and lack soft-tissue contrast, making direct segmentation very challenging. Propagating expert-drawn contours from the pretreatment planning CT through traditional or deep learning (DL)-based deformable image registration (DIR) can achieve improved results in many situations. Typical DL-based DIR models are population based, that is, trained with a dataset for a population of patients, so they may suffer from a generalizability problem. METHODS: In this paper, we propose a method called test-time optimization (TTO) to refine a pretrained DL-based DIR population model, first for each individual test patient, and then progressively for each fraction of online ART treatment. Our proposed method is less susceptible to the generalizability problem and thus can improve the overall performance of different DL-based DIR models by improving model accuracy, especially for outliers. Our experiments used data from 239 patients with head-and-neck squamous cell carcinoma to test the proposed method. First, we trained a population model with 200 patients and then applied TTO to the remaining 39 test patients by refining the trained population model to obtain 39 individualized models. We compared each of the individualized models with the population model in terms of segmentation accuracy. RESULTS: The average improvement in the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) was up to 0.04 (5%) and 0.98 mm (25%), respectively, with the individualized models compared to the population model, over 17 selected OARs and one target for the 39 patients. Although the average improvement may seem mild, we found that the improvement for outlier patients with large anatomical changes was significant.
For the state-of-the-art architecture VoxelMorph, 10 of the 39 test patients showed at least 0.05 DSC improvement or 2 mm HD95 improvement by TTO, averaged over the 17 selected structures. By deriving the individualized model from the pretrained population model, a TTO model can be ready in about one minute. We also generated adapted fractional models for each of the 39 test patients by progressively refining the individualized models using TTO on CBCT images acquired at later fractions of online ART treatment. When adapting the individualized model to a later fraction of the same patient, the model was ready in less than a minute with slightly improved accuracy. CONCLUSIONS: The proposed TTO method is well suited for online ART and can boost segmentation accuracy for DL-based DIR models, especially for outlier patients where pretrained models fail.
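The core of TTO — starting from a pretrained estimate and refining it by gradient descent on the test pair itself — can be illustrated with a deliberately tiny stand-in that optimizes a single 1-D translation instead of network weights (all signals and hyperparameters below are invented for illustration):

```python
import numpy as np

def tto_refine_shift(moving, fixed, t0=0.0, steps=200, lr=0.5, eps=1e-3):
    """Refine a single translation parameter on ONE test pair by gradient
    descent on image dissimilarity -- a toy analogue of test-time
    optimization (a real DIR model would optimize network weights)."""
    xs = np.arange(moving.size, dtype=float)
    loss = lambda t: np.mean((np.interp(xs - t, xs, moving) - fixed) ** 2)
    t = t0
    for _ in range(steps):
        # central finite-difference gradient of the similarity loss
        t -= lr * (loss(t + eps) - loss(t - eps)) / (2.0 * eps)
    return t

xs = np.arange(100, dtype=float)
moving = 10.0 * np.exp(-((xs - 50.0) ** 2) / 50.0)
fixed = np.interp(xs - 3.0, xs, moving)   # ground-truth shift = 3
t_hat = tto_refine_shift(moving, fixed)   # "individualized" estimate
```

The per-fraction adaptation in the paper repeats the same idea, warm-starting each refinement from the previous one.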


Subject(s)
Head and Neck Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods
6.
IEEE Trans Med Imaging ; 42(6): 1835-1845, 2023 06.
Article in English | MEDLINE | ID: mdl-37022248

ABSTRACT

In this study, we propose a computer-aided diagnosis (CADx) framework for dual-energy spectral CT (DECT), called CADxDE, which operates directly on the transmission data in the pre-log domain to explore the spectral information for lesion diagnosis. CADxDE includes material identification and machine learning (ML)-based CADx. Benefiting from DECT's capability of performing virtual monoenergetic imaging with the identified materials, the responses of different tissue types (e.g., muscle, water, and fat) in lesions at each energy can be explored by ML for CADx. To avoid losing essential factors in the DECT scan, a pre-log domain model-based iterative reconstruction is adopted to obtain decomposed material images, which are then used to generate virtual monoenergetic images (VMIs) at n selected energies. While these VMIs share the same anatomy, their contrast distribution patterns contain rich information across the n energies for tissue characterization. A corresponding ML-based CADx was therefore developed to exploit the energy-enhanced tissue features for differentiating malignant from benign lesions. Specifically, an original image-driven multi-channel three-dimensional convolutional neural network (CNN) and an extracted lesion-feature-based ML CADx method were developed to show the feasibility of CADxDE. Results from three pathologically proven clinical datasets showed 4.01% to 14.25% higher AUC (area under the receiver operating characteristic curve) scores than both the conventional DECT data (high and low energy spectra separately) and the conventional CT data. The mean gain of >9.13% in AUC scores indicates that the energy-spectral-enhanced tissue features from CADxDE have great potential to improve lesion diagnosis performance.
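Given decomposed material maps, a virtual monoenergetic image is, to first order, a weighted sum of the material density maps with each material's attenuation coefficient at the chosen energy. A toy sketch (the coefficients are invented for illustration, not physical values):

```python
import numpy as np

def vmi(material_maps, mu_at_energy):
    """Virtual monoenergetic image: sum of decomposed material density
    maps weighted by each material's attenuation coefficient at the
    selected energy (illustrative linear model)."""
    return sum(m * mu for m, mu in zip(material_maps, mu_at_energy))
```

Evaluating `vmi` at several energies yields the stack of same-anatomy, different-contrast images that the CADx network consumes.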


Subject(s)
Diagnosis, Computer-Assisted; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; ROC Curve; Machine Learning
7.
Med Phys ; 50(12): 7368-7382, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37358195

ABSTRACT

BACKGROUND: MRI-only radiotherapy planning (MROP) benefits patients by avoiding MRI/CT registration errors, simplifying the radiation treatment simulation workflow, and reducing exposure to ionizing radiation. MRI is the primary imaging modality for soft tissue delineation. Treatment planning CTs (i.e., CT simulation scans) become redundant if a synthetic CT (sCT) can be generated from the MRI to provide the patient positioning and electron density information. Unsupervised deep learning (DL) models like CycleGAN are widely used for MR-to-sCT conversion when paired patient CT and MR image datasets are not available for model training. However, compared to supervised DL models, they cannot guarantee anatomic consistency, especially around bone. PURPOSE: The purpose of this work was to improve the accuracy of sCT generated from MRI around bone for MROP. METHODS: To generate more reliable bony structures on sCT images, we proposed adding bony structure constraints to the unsupervised CycleGAN model's loss function and leveraging Dixon-constructed fat and in-phase (IP) MR images. Dixon images provide better bone contrast than T2-weighted images as inputs to a modified multi-channel CycleGAN. A private dataset of 31 prostate cancer patients was used for training (20) and testing (11). RESULTS: We compared model performance with and without bony structure constraints using single- and multi-channel inputs. Among all the models, the multi-channel CycleGAN with bony structure constraints had the lowest mean absolute error, both inside the bone and over the whole body (50.7 and 145.2 HU, respectively). This approach also yielded the highest Dice similarity coefficient (0.88) of all bony structures compared with the planning CT. CONCLUSION: The modified multi-channel CycleGAN with bony structure constraints, taking Dixon-constructed fat and IP images as inputs, can generate clinically suitable sCT images in both bone and soft tissue. The generated sCT images have the potential to be used for accurate dose calculation and patient positioning in MROP radiation therapy.
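One simple way to express a bony-structure constraint is an extra penalty restricted to a bone mask. The sketch below is illustrative only: it assumes a paired supervised setting and an invented weight, whereas the paper's actual loss is part of an unsupervised CycleGAN objective:

```python
import numpy as np

def loss_with_bone_constraint(sct, ct, bone_mask, lam=5.0):
    """Toy composite loss: global L1 error plus an extra L1 term computed
    only inside a bone mask, up-weighting bone fidelity (lam is an
    illustrative weight, not the paper's)."""
    l1 = np.mean(np.abs(sct - ct))
    bone_l1 = np.mean(np.abs((sct - ct)[bone_mask]))
    return l1 + lam * bone_l1
```

Errors inside the mask are thus counted twice, steering optimization toward anatomically consistent bone.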


Subject(s)
Radiotherapy, Intensity-Modulated; Male; Humans; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Dosage; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Pelvis; Image Processing, Computer-Assisted/methods
8.
Phys Med Biol ; 67(11)2022 05 24.
Article in English | MEDLINE | ID: mdl-35483350

ABSTRACT

Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to the extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localizing the tumor from the scarce projections. For liver radiotherapy, this challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissue. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve for tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates its potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking, and real-time plan adaptation, and it can be adapted to other anatomical sites as well.


Subject(s)
Liver Neoplasms; Radiotherapy, Image-Guided; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/radiotherapy; Neural Networks, Computer; Radiography; Radiotherapy, Image-Guided/methods; X-Rays
9.
IEEE J Biomed Health Inform ; 26(12): 6105-6115, 2022 12.
Article in English | MEDLINE | ID: mdl-36367915

ABSTRACT

Quantification of left ventricular (LV) ejection fraction (EF) from echocardiography depends on the identification of endocardium boundaries and the calculation of end-diastolic (ED) and end-systolic (ES) LV volumes. Segmenting the LV cavity is critical for the precise calculation of EF from echocardiography. Most existing echocardiography segmentation approaches either segment only the ES and ED frames without leveraging motion information, or use motion information only as an auxiliary task. To address these drawbacks, we propose a novel echocardiography segmentation method that effectively utilizes the underlying motion information by accurately predicting optical flow (OF) fields. First, we devised a feature extractor shared by the segmentation and optical flow sub-tasks for efficient information exchange. Then, we proposed a new orientation congruency constraint for the OF estimation sub-task that promotes the congruency of optical flow orientation between successive frames. Finally, we designed a motion-enhanced segmentation module for the final segmentation. Experimental results show that the proposed method achieved state-of-the-art performance for EF estimation, with a Pearson correlation coefficient of 0.893 and a mean absolute error of 5.20% when validated on echo sequences of 450 patients.
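Orientation congruency between successive flow fields can be scored with the mean cosine similarity of the per-pixel flow vectors; a training constraint could then penalize one minus this score. A minimal sketch (not the paper's exact formulation):

```python
import numpy as np

def orientation_congruency(flow_a, flow_b, eps=1e-8):
    """Mean cosine similarity between the orientations of two optical-flow
    fields of shape (H, W, 2). Returns values in [-1, 1]; a congruency
    penalty could be 1 minus this score."""
    dot = np.sum(flow_a * flow_b, axis=-1)
    norm = np.linalg.norm(flow_a, axis=-1) * np.linalg.norm(flow_b, axis=-1)
    return float(np.mean(dot / (norm + eps)))
```

Identical flow fields score approximately +1, opposed fields approximately -1, so maximizing the score encourages smoothly varying motion between frames.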


Subject(s)
Optic Flow; Humans; Echocardiography/methods; Ventricular Function, Left; Stroke Volume; Heart Ventricles/diagnostic imaging
10.
Radiol Artif Intell ; 4(5): e210214, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204538

ABSTRACT

Purpose: To present a concept called artificial intelligence-assisted contour editing (AIACE) and demonstrate its feasibility. Materials and Methods: The conceptual workflow of AIACE is as follows: Given an initial contour that requires clinician editing, the clinician indicates where large editing is needed, and a trained deep learning model uses this input to update the contour. This process repeats until a clinically acceptable contour is achieved. In this retrospective, proof-of-concept study, the authors demonstrated the concept on two-dimensional (2D) axial CT images from three head-and-neck cancer datasets by simulating the interaction with the AIACE model to mimic the clinical environment. The input at each iteration was one mouse click on the desired location of the contour segment. Model performance was quantified with the Dice similarity coefficient (DSC) and 95th percentile of Hausdorff distance (HD95) on three datasets with sample sizes of 10, 28, and 20 patients. Results: The average DSCs and HD95 values of the automatically generated initial contours were 0.82 and 4.3 mm, 0.73 and 5.6 mm, and 0.67 and 11.4 mm for the three datasets, which improved to 0.91 and 2.1 mm, 0.86 and 2.5 mm, and 0.86 and 3.3 mm, respectively, with three mouse clicks. Each deep learning-based contour update required about 20 ms. Conclusion: The authors proposed the newly developed AIACE concept, which uses deep learning models to assist clinicians in editing contours efficiently and effectively, and demonstrated its feasibility by using 2D axial CT images from three head-and-neck cancer datasets. Keywords: Segmentation, Convolutional Neural Network (CNN), CT, Deep Learning Algorithms. Supplemental material is available for this article. © RSNA, 2022.
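The click-driven update can be caricatured without any learned model: pull the contour vertices near the click toward the clicked location. The learned AIACE update is far more sophisticated; this sketch, with an invented distance weighting, only illustrates the data flow of the interaction loop:

```python
import numpy as np

def apply_click(contour, click, radius=3.0):
    """Toy contour edit: move each vertex toward the clicked point with a
    weight that decays with distance from the click (invented weighting,
    a stand-in for a learned AIACE-style update).
    contour: (N, 2) array of vertices; click: (2,) point."""
    d = np.linalg.norm(contour - click, axis=1)
    w = np.clip(1.0 - d / (d.min() + radius), 0.0, 1.0)[:, None]
    return contour + w * (click - contour)
```

Iterating this update with successive clicks mimics the repeat-until-acceptable workflow described above.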

11.
Med Phys ; 48(5): 2258-2270, 2021 May.
Article in English | MEDLINE | ID: mdl-33621348

ABSTRACT

PURPOSE: Despite the indispensable role of x-ray computed tomography (CT) in diagnostic medicine, the associated harmful ionizing radiation dose is a major concern, as it may cause genetic diseases and cancer. Decreasing patients' exposure can reduce the radiation dose and hence the related risks, but it inevitably induces higher quantum noise. Supervised deep learning techniques have been used to train deep neural networks for denoising low-dose CT (LDCT) images, but the success of such strategies requires massive sets of pixel-level paired LDCT and normal-dose CT (NDCT) images, which are rarely available in real clinical practice. Our purpose is to mitigate the data scarcity problem for deep learning-based LDCT denoising. METHODS: To solve this problem, we devised a shift-invariant property-based neural network that uses only the LDCT images to characterize both the inherent pixel correlations and the noise distribution, forming our probabilistic self-learning (PSL) framework. The AAPM Low-dose CT Challenge dataset was used to train the network. Both simulated datasets and a real dataset were employed to test the denoising performance as well as the model generalizability. The performance was compared to a conventional method (total variation (TV)-based), a popular self-learning method (noise2void (N2V)), and a well-known unsupervised learning method (CycleGAN) using both qualitative visual inspection and quantitative metrics, including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and contrast-to-noise ratio (CNR). The standard deviations (STD) of selected flat regions were also calculated for comparison. RESULTS: The PSL method improved the averaged PSNR/SSIM values from 27.61/0.5939 (LDCT) to 30.50/0.6797. By comparison, the averaged PSNR/SSIM values were 31.49/0.7284 (TV), 29.43/0.6699 (N2V), and 29.79/0.6992 (CycleGAN).
The averaged STDs of selected flat regions were 132.3 HU (LDCT), 25.77 HU (TV), 19.95 HU (N2V), 75.06 HU (CycleGAN), 60.62 HU (PSL), and 57.28 HU (NDCT). For low-contrast lesion detectability, the CNRs were 0.202 (LDCT), 0.356 (TV), 0.372 (N2V), 0.383 (CycleGAN), 0.399 (PSL), and 0.359 (NDCT). By visual inspection, we observed that the proposed PSL method delivered noise-suppressed, detail-preserved images, while the TV-based method led to blocky artifacts, the N2V method produced over-smoothed structures and biased CT values, and the CycleGAN method generated slightly noisy results with inaccurate CT values. We also verified the generalizability of the PSL method, which exhibited superior denoising performance across various testing datasets with different data distribution shifts. CONCLUSIONS: A deep learning-based convolutional neural network can be trained without paired datasets. Qualitative visual inspection showed that the proposed PSL method achieved superior denoising performance compared with all competitors, although the employed quantitative metrics (PSNR, SSIM, and CNR) did not always show consistently better values.
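PSNR, the headline metric in the comparison above, is defined from the mean squared error relative to the reference image's dynamic range; a minimal implementation:

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image; data_range defaults to the reference's dynamic range."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```

For CT comparisons the data range is usually fixed by convention (e.g., the display window) rather than taken from the image, so the default here is only a convenience.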


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Artifacts; Humans; Neural Networks, Computer; Signal-To-Noise Ratio
12.
Biomed Phys Eng Express ; 7(2)2021 02 24.
Article in English | MEDLINE | ID: mdl-33545707

ABSTRACT

Background and purpose. Replacing CT imaging with MR imaging for MR-only radiotherapy has sparked the interest of many scientists and is being increasingly adopted in radiation oncology. Although many studies have focused on generating CT images from MR images, the trained models were typically tested only on data from the same dataset. Therefore, how well a trained model works for data from different hospitals and MR protocols is still unknown. In this study, we addressed the model generalization problem for the MR-to-CT conversion task. Materials and methods. Brain T2 MR and corresponding CT images were collected from SZSPH (source domain dataset); brain T1-FLAIR and T1-POST MR images and corresponding CT images were collected from The University of Texas Southwestern (UTSW) (target domain dataset). To investigate the model's generalization ability, four potential solutions were proposed: a source model, a target model, a combined model, and an adapted model. All models were trained using the CycleGAN network. The source model was trained from scratch with the source domain dataset and tested with the target domain dataset. The target model was trained and tested with the target domain dataset. The combined model was trained with both the source and target domain datasets and tested with the target domain dataset. The adapted model used a transfer learning strategy: a CycleGAN model was trained with the source domain dataset and then retrained with the target domain dataset. MAE, RMSE, PSNR, and SSIM were used to quantitatively evaluate model performance on the target domain dataset. Results. The adapted model achieved the best quantitative results: MAE, RMSE, PSNR, and SSIM of 74.56 ± 8.61, 193.18 ± 17.98, 28.30 ± 0.83, and 0.84 ± 0.01 on the T1-FLAIR dataset, and 74.89 ± 15.64, 195.73 ± 31.29, 27.72 ± 1.43, and 0.83 ± 0.04 on the T1-POST dataset.
The source model had the poorest performance. Conclusions. This work indicates a high generalization ability for generating synthetic CT images from small training datasets of MR images using a pre-trained CycleGAN. The quantitative results on test data spanning different scanning protocols and acquisition centers support this proof of concept.
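The adapted model's recipe — pretrain on the source domain, then briefly retrain on the target domain starting from the pretrained weights — can be miniaturized to a one-parameter model (the CycleGAN is replaced by a toy linear fit, and all data are synthetic):

```python
import numpy as np

def fit_gd(x, y, w0=0.0, steps=100, lr=0.1):
    """Fit y ≈ w * x by full-batch gradient descent from initial weight w0."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=200)
w_src = fit_gd(x, 2.0 * x)                         # pretrain on "source" (y = 2x)
w_adapt = fit_gd(x, 2.2 * x, w0=w_src, steps=20)   # brief "target" fine-tune (y = 2.2x)
```

Because fine-tuning starts near the source solution, only a few steps are needed to reach the target domain — the same economy that makes the adapted model attractive for small target datasets.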


Subject(s)
Deep Learning; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Tomography, X-Ray Computed
13.
Med Phys ; 48(8): 4438-4447, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34091925

ABSTRACT

PURPOSE: Radiation therapy treatment planning is a trial-and-error, often time-consuming process. An approximately optimal dose distribution corresponding to a specific patient's anatomy can be predicted by using pre-trained deep learning (DL) models. However, dose distributions are often optimized based not only on patient-specific anatomy but also on physicians' preferred trade-offs between planning target volume (PTV) coverage and organ-at-risk (OAR) sparing, or among different OARs. Therefore, it is desirable to allow physicians to fine-tune the dose distribution predicted from patient anatomy. In this work, we developed a DL model to predict individualized 3D dose distributions by using not only the patient's anatomy but also the desired PTV/OAR trade-offs, as represented by a dose-volume histogram (DVH), as inputs. METHODS: We developed a modified U-Net network to predict the 3D dose distribution using patient PTV/OAR masks and the desired DVH as inputs. The desired DVH, fine-tuned by physicians from the initially predicted DVH, is first projected onto the Pareto surface, then converted into a vector, and finally concatenated with feature maps encoded from the PTV/OAR masks. The network output for training is the dose distribution corresponding to the Pareto-optimal DVH. The training/validation datasets contain 77 prostate cancer patients, and the testing dataset has 20 patients. RESULTS: The trained model can predict a 3D dose distribution that is approximately Pareto optimal while having the DVH closest to the input desired DVH. As a quantitative evaluation, we calculated the difference between the predicted dose distribution and the optimized dose distribution whose DVH is closest to the desired one, for the PTV and for all OARs. The largest absolute error in mean dose was about 3.6% of the prescription dose, and the largest absolute error in maximum dose was about 2.0% of the prescription dose.
CONCLUSIONS: In this feasibility study, we developed a 3D U-Net model that takes the patient's anatomy and the desired DVH curves as inputs and predicts an individualized 3D dose distribution that is approximately Pareto optimal while having the DVH closest to the desired one. The predicted dose distributions can be used as references for dosimetrists and physicians to rapidly develop a clinically acceptable treatment plan.
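A cumulative DVH, the trade-off representation used as the model input above, maps each dose threshold to the fraction of a structure's volume receiving at least that dose. A minimal sketch:

```python
import numpy as np

def dvh(dose, mask, thresholds):
    """Cumulative dose-volume histogram: for each dose threshold, the
    fraction of the masked structure receiving at least that dose."""
    d = dose[mask]
    return np.array([(d >= t).mean() for t in thresholds])
```

Sampling this curve at fixed thresholds produces exactly the kind of fixed-length vector that can be concatenated with encoded anatomy features.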


Subject(s)
Deep Learning; Radiotherapy, Intensity-Modulated; Feasibility Studies; Humans; Male; Organs at Risk; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted
14.
IEEE Trans Med Imaging ; 40(11): 2965-2975, 2021 11.
Article in English | MEDLINE | ID: mdl-34329156

ABSTRACT

Low-dose computed tomography (LDCT) is desirable for both diagnostic imaging and image-guided interventions. Denoisers are widely used to improve the quality of LDCT. Deep learning (DL)-based denoisers have shown state-of-the-art performance and are becoming mainstream methods. However, there are two challenges to using DL-based denoisers: 1) a trained model typically does not generate different image candidates with different noise-resolution tradeoffs, which are sometimes needed for different clinical tasks; and 2) the model's generalizability might be an issue when the noise level in the testing images differs from that in the training dataset. To address these two challenges, in this work, we introduce a lightweight optimization process that can run on top of any existing DL-based denoiser during the testing phase to generate multiple image candidates with different noise-resolution tradeoffs suitable for different clinical tasks in real time. Consequently, our method allows users to interact with the denoiser to efficiently review various image candidates and quickly pick the desired one; thus, we termed this method deep interactive denoiser (DID). Experimental results demonstrated that DID can deliver multiple image candidates with different noise-resolution tradeoffs and shows great generalizability across various network architectures, as well as training and testing datasets with various noise levels.
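A crude stand-in for generating noise-resolution candidates is to blend the denoiser output with its noisy input at several weights; DID's actual lightweight optimization is learned, so this sketch only illustrates the candidate-generation interface a user would interact with:

```python
import numpy as np

def candidates(noisy, denoised, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Generate image candidates along a noise-resolution trade-off by
    convex blending of the denoiser output with the noisy input
    (a toy stand-in for DID's optimization-based candidates)."""
    return [a * noisy + (1.0 - a) * denoised for a in alphas]
```

Larger blend weights keep more of the input's noise but also more of its fine detail, giving the reviewer a spectrum to pick from in real time.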


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Tomography, X-Ray Computed
15.
IEEE Trans Med Imaging ; 36(12): 2466-2478, 2017 12.
Article in English | MEDLINE | ID: mdl-28981411

ABSTRACT

Despite the rapid development of X-ray cone-beam CT (CBCT), image noise remains a major issue for low-dose CBCT. To suppress the noise effectively while retaining the structures in low-dose CBCT images, in this paper, a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity-level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiency in representing volumetric images. Then, multiple real-data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients have a Laplacian distribution three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity-level curve demonstrates a clear Z shape and is hence referred to as the Z-curve in this paper; 3) the parameter associated with the maximum-curvature point of the Z-curve indicates a good parameter choice, which can be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root-mean-squared errors and the highest structural similarity index compared with the competing methods; 5) noise performance similar to the regular-dose FDK reconstruction, in terms of the standard-deviation metric, can be achieved with the proposed method using (1/2)/(1/4)/(1/8)-dose projections. The contrast-to-noise ratio is improved by ~2.5/3.5 times in two different cases at the (1/8) dose level compared with the low-dose FDK reconstruction. The proposed method is therefore expected to reduce the radiation dose by a factor of 8 for CBCT, given that observers voted the low-contrast tissues as strongly discriminated.
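The ZIP strategy locates the maximum-curvature point of the sampled Z-curve. A simplified discrete sketch of that idea, assuming approximately uniform sampling and using central differences (illustrative only; the paper's actual parameterization differs):

```python
def max_curvature_index(xs, ys):
    """Return the index of maximum discrete curvature of a sampled curve,
    kappa = |y''| / (1 + y'^2)**1.5, via central differences.
    Endpoints are excluded; sampling in x is assumed roughly uniform."""
    best_i, best_k = None, -1.0
    for i in range(1, len(xs) - 1):
        h = (xs[i + 1] - xs[i - 1]) / 2.0
        dy = (ys[i + 1] - ys[i - 1]) / (2.0 * h)
        d2y = (ys[i + 1] - 2.0 * ys[i] + ys[i - 1]) / (h * h)
        k = abs(d2y) / (1.0 + dy * dy) ** 1.5
        if k > best_k:
            best_i, best_k = i, k
    return best_i

# Toy sparsity-level curve with a single knee
lambdas = [0, 1, 2, 3, 4, 5]
sparsity = [5.0, 5.0, 5.0, 1.0, 1.0, 1.0]
knee = max_curvature_index(lambdas, sparsity)  # knee == 2
```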


Subject(s)
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Databases, Factual , Female , Head/diagnostic imaging , Humans , Male , Prostate/diagnostic imaging
16.
Phys Med Biol ; 60(9): 3567-87, 2015 May 07.
Article in English | MEDLINE | ID: mdl-25860299

ABSTRACT

Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved a computation time of less than 30 s, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use.
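One of the speed-up strategies named above is interpolation along the angular direction: simulate scatter by MC at only a sparse set of gantry angles, interpolate to all projection angles, then subtract. A minimal sketch of that idea on per-angle scalar scatter estimates (real projections are 2-D; all names and values here are illustrative):

```python
def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate scatter estimates, MC-simulated only at a sparse
    set of gantry angles, to every projection angle."""
    out = []
    for a in all_angles:
        if a <= sparse_angles[0]:
            out.append(sparse_scatter[0])
            continue
        if a >= sparse_angles[-1]:
            out.append(sparse_scatter[-1])
            continue
        for j in range(len(sparse_angles) - 1):
            a0, a1 = sparse_angles[j], sparse_angles[j + 1]
            if a0 <= a <= a1:
                t = (a - a0) / (a1 - a0)
                out.append((1 - t) * sparse_scatter[j] + t * sparse_scatter[j + 1])
                break
    return out

def correct_projections(raw, scatter):
    """Subtract the estimated scatter signal from each raw projection value."""
    return [r - s for r, s in zip(raw, scatter)]
```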


Subject(s)
Cone-Beam Computed Tomography/methods , Radiotherapy, Image-Guided/methods , Software , Monte Carlo Method , Scattering, Radiation
17.
Med Phys ; 41(11): 111912, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25370645

ABSTRACT

PURPOSE: Compressed sensing (CS)-based iterative reconstruction (IR) techniques are able to reconstruct cone-beam CT (CBCT) images from undersampled noisy data, allowing for imaging dose reduction. However, there are a few practical concerns preventing the clinical implementation of these techniques. On the image quality side, data truncation along the superior-inferior direction under the cone-beam geometry produces severe cone artifacts in the reconstructed images. Ring artifacts are also seen in the half-fan scan mode. On the reconstruction efficiency side, the long computation time hinders clinical use in image-guided radiation therapy (IGRT). METHODS: Image quality improvement methods are proposed to mitigate the cone and ring image artifacts in IR. The basic idea is to use weighting factors in the IR data fidelity term to improve projection data consistency with the reconstructed volume. In order to improve the computational efficiency, a multiple graphics processing units (GPUs)-based CS-IR system was developed. The parallelization scheme, detailed analyses of computation time at each step, their relationship with image resolution, and the acceleration factors were studied. The whole system was evaluated in various phantom and patient cases. RESULTS: Ring artifacts can be mitigated by properly designing a weighting factor as a function of the spatial location on the detector. As for the cone artifact, without applying a correction method, it contaminated 13 out of 80 slices in a head-neck case (full-fan). Contamination was even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices were affected, leading to poorer soft tissue delineation and reduced superior-inferior coverage. The proposed method effectively corrects those contaminated slices with mean intensity differences compared to FDK results decreasing from ∼497 and ∼293 HU to ∼39 and ∼27 HU for the full-fan and half-fan cases, respectively. In terms of efficiency boost, an overall 3.1× speedup factor has been achieved with four GPU cards compared to a single GPU-based reconstruction. The total computation time is ∼30 s for typical clinical cases. CONCLUSIONS: The authors have developed a low-dose CBCT IR system for IGRT. By incorporating data consistency-based weighting factors in the IR model, cone/ring artifacts can be mitigated. A boost in computational efficiency is achieved by multi-GPU implementation.
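The data-consistency weighting idea can be caricatured as a detector-position-dependent weight inside a weighted least-squares fidelity term. The weighting profile below is purely hypothetical (the paper designs its own weights as a function of detector location); it only shows how such weights down-weight less consistent detector columns:

```python
def detector_weights(num_u, border=2, border_weight=0.2):
    """Hypothetical weighting profile: down-weight detector columns near the
    lateral border, where projection data are assumed less consistent with
    the reconstructed volume."""
    w = []
    for u in range(num_u):
        dist = min(u, num_u - 1 - u)  # distance to the nearer detector edge
        w.append(border_weight if dist < border else 1.0)
    return w

def weighted_fidelity(residuals, weights):
    """Weighted least-squares data fidelity term: sum_i w_i * r_i**2."""
    return sum(w * r * r for r, w in zip(residuals, weights))
```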


Subject(s)
Computer Graphics , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Radiation Dosage , Radiotherapy, Image-Guided , Algorithms , Artifacts , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Humans , Phantoms, Imaging , Time Factors