Results 1 - 20 of 127
1.
Med Phys ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896829

ABSTRACT

BACKGROUND: Head and neck (HN) gross tumor volume (GTV) auto-segmentation is challenging due to the morphological complexity and low image contrast of targets. Multi-modality images, including computed tomography (CT) and positron emission tomography (PET), are used in routine clinical practice to assist radiation oncologists in accurate GTV delineation. However, the availability of PET imaging may not always be guaranteed. PURPOSE: To develop a deep learning segmentation framework for automated GTV delineation of HN cancers using a combination of PET/CT images, while addressing the challenge of missing PET data. METHODS: Two datasets were included in this study: Dataset I: 524 (training) and 359 (testing) oropharyngeal cancer patients from different institutions with their PET/CT pairs provided by the HECKTOR Challenge; Dataset II: 90 HN patients (testing) from a local institution with their planning CT and PET/CT pairs. To handle potentially missing PET images, a model training strategy named the "Blank Channel" method was implemented. To simulate the absence of a PET image, a blank array with the same dimensions as the CT image was generated to meet the dual-channel input requirement of the deep learning model. During model training, the model was randomly presented with either a real PET/CT pair or a blank/CT pair. This allowed the model to learn the relationship between the CT image and the corresponding GTV delineation based on the available modalities. As a result, our model could handle flexible inputs at prediction time, making it suitable for cases in which PET images are missing. To evaluate the performance of our proposed model, we trained it using the training patients from Dataset I and tested it with Dataset II. We compared our model (Model 1) with two other models, each trained for segmentation with a specific input configuration: Model 2, trained with only CT images, and Model 3, trained with real PET/CT pairs. The performance of the models was evaluated using quantitative metrics, including the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95). In addition, we evaluated our Model 1 and Model 3 using the 359 test cases in Dataset I. RESULTS: Our proposed model (Model 1) achieved promising results for GTV auto-segmentation using PET/CT images, with the flexibility to handle missing PET images. Specifically, when assessed with only CT images in Dataset II, Model 1 achieved a DSC of 0.56 ± 0.16, an MSD of 3.4 ± 2.1 mm, and an HD95 of 13.9 ± 7.6 mm. When the PET images were included, the performance of our model improved to a DSC of 0.62 ± 0.14, an MSD of 2.8 ± 1.7 mm, and an HD95 of 10.5 ± 6.5 mm. These results are comparable to those achieved by Model 2 and Model 3, illustrating Model 1's effectiveness in utilizing flexible input modalities. Further analysis using the test dataset from Dataset I showed that Model 1 achieved an average DSC of 0.77, surpassing the overall average DSC of 0.72 among all participants in the HECKTOR Challenge. CONCLUSIONS: We successfully refined a multi-modal segmentation tool for accurate GTV delineation for HN cancer. Our method addresses the issue of missing PET images by allowing flexible data input, thereby providing a practical solution for clinical settings where access to PET imaging may be limited.
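A minimal sketch of the "Blank Channel" substitution described above: during training, the PET channel of the dual-channel input is randomly replaced by a zero-filled array so the network also learns to segment from CT alone. The 50% substitution probability and the array shapes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def make_dual_channel_input(ct, pet=None, p_blank=0.5, rng=None):
    """ct, pet: 3D arrays of identical shape (D, H, W); returns a (2, D, H, W) input."""
    rng = np.random.default_rng() if rng is None else rng
    if pet is None or rng.random() < p_blank:
        pet_channel = np.zeros_like(ct)          # blank array stands in for the PET
    else:
        pet_channel = pet
    return np.stack([ct, pet_channel], axis=0)   # dual-channel input for the network

# At prediction time the same code path handles a CT-only case:
# x = make_dual_channel_input(ct, pet=None, p_blank=1.0)
```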

2.
Phys Med Biol ; 69(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38697195

ABSTRACT

Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g. breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular x-ray projections. Specifically, PMF-STINR uses spatial implicit neural representations to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy. Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
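As a rough illustration of the spatial implicit neural representation component described above, the sketch below maps normalized (x, y, z) coordinates to a CT intensity with a small coordinate MLP. The positional encoding and layer sizes are assumptions for illustration, not the paper's architecture; the temporal INR and B-spline motion model are omitted.

```python
import torch
import torch.nn as nn

class SpatialINR(nn.Module):
    """Coordinate MLP: normalized (x, y, z) in [-1, 1] -> one intensity value."""
    def __init__(self, hidden=256, n_freqs=8):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 * 2 * n_freqs                      # sin/cos positional encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                           # xyz: (N, 3)
        freqs = (2.0 ** torch.arange(self.n_freqs, device=xyz.device)) * torch.pi
        enc = torch.cat([torch.sin(xyz[..., None] * freqs),
                         torch.cos(xyz[..., None] * freqs)], dim=-1)
        return self.mlp(enc.flatten(-2))              # (N, 1) intensities

# In a reconstruction loop, the MLP weights would be optimized so that
# forward-projected intensities match the acquired x-ray projections.
```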


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Cone-Beam Computed Tomography/methods , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Machine Learning
3.
Med Phys ; 51(7): 4646-4654, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38648671

ABSTRACT

BACKGROUND: Data-driven gated (DDG) PET has gained clinical acceptance and has been shown to match or outperform external-device gated (EDG) PET. However, in most clinical applications, DDG PET is matched with helical CT acquired in free breathing (FB) at a random respiratory phase, leaving registration and optimal attenuation correction (AC) to chance. Furthermore, DDG PET requires additional scan time to reduce image noise, as it only preserves 35%-50% of the PET data at or near the end-expiratory phase of the breathing cycle. PURPOSE: A new full-counts, phase-matched (FCPM) DDG PET/CT was developed based on a low-dose cine CT to improve registration between DDG PET and DDG CT, to reduce image noise, and to avoid increasing acquisition times in DDG PET. METHODS: A new DDG CT was developed to provide three respiratory phases of CT images from a low-dose cine CT acquisition of 1.35 mSv covering about 15.4 cm: end-inspiration (EI), average (AVG), and end-expiration (EE), matched to the corresponding phase ranges of the DDG PET data: -10% to 15% (EI); 15% to 30% and 80% to 90% (AVG); and 30% to 80% (EE), respectively. The EI and EE phases of DDG CT were selected based on the physiological changes in lung density and body outlines reflected in the dynamic cine CT images. The AVG phase was derived by averaging all phases of the cine CT images. The cine CT was acquired over the lower lungs and/or upper abdomen to correct misregistration between PET and FB CT as well as between DDG PET and FB CT. The three phases of DDG CT were used for AC of the corresponding phases of PET. After phase-matched AC of each PET dataset, the EI and AVG PET data were registered to the EE PET data with deformable image registration. The final result was FCPM DDG PET/CT, which accounts for all PET data registered at the EE phase. We applied this approach to 14 18F-FDG lung cancer patient studies acquired at 2 min/bed position on the GE Discovery MI (25-cm axial FOV) and evaluated its efficacy in improving quantification and reducing noise. RESULTS: Relative to static PET/CT, the SUVmax increases for the EI, AVG, EE, and FCPM DDG PET/CT were 1.67 ± 0.40, 1.50 ± 0.28, 1.64 ± 0.36, and 1.49 ± 0.28, respectively. There were 10.8% and 9.1% average decreases in SUVmax from EI and EE to FCPM DDG PET/CT, respectively. EI, AVG, and EE DDG PET/CT all showed increased image noise relative to static PET/CT. However, the noise levels of FCPM and static PET were statistically equivalent, suggesting that the inclusion of all counts decreased the image noise relative to EI and EE DDG PET/CT. CONCLUSIONS: A new FCPM DDG PET/CT has been developed to account for 100% of the collected PET data in DDG PET applications. Image noise in FCPM is comparable to static PET, while small decreases in SUVmax were also observed in FCPM when compared to either EI or EE DDG PET/CT.
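A minimal sketch of the full-counts, phase-matched combination described above: the EI and AVG PET reconstructions are deformed to the EE frame and the three are combined with count weights. The weights below follow the fractions of the breathing cycle each group covers in the text (25%, 25%, 50%), and the deformable warp is a stand-in placeholder, not the paper's registration method.

```python
import numpy as np

def warp_to_ee(volume, dvf):
    """Placeholder for deformable registration of `volume` to the EE frame."""
    return volume  # a real implementation would resample the volume with the DVF

def fcpm_combine(pet_ei, pet_avg, pet_ee, dvf_ei_to_ee, dvf_avg_to_ee,
                 weights=(0.25, 0.25, 0.50)):
    """Count-weighted combination so that 100% of the PET data contributes at EE."""
    w_ei, w_avg, w_ee = weights
    return (w_ei * warp_to_ee(pet_ei, dvf_ei_to_ee)
            + w_avg * warp_to_ee(pet_avg, dvf_avg_to_ee)
            + w_ee * pet_ee)
```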


Subject(s)
Image Processing, Computer-Assisted , Positron Emission Tomography Computed Tomography , Humans , Image Processing, Computer-Assisted/methods , Respiration , Signal-To-Noise Ratio , Phantoms, Imaging
4.
Comput Med Imaging Graph ; 113: 102353, 2024 04.
Article in English | MEDLINE | ID: mdl-38387114

ABSTRACT

Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic details from the truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator by including a body mask and introduce a truncation loss between the sCT and the real CT. The body mask for each patient was automatically created from the simulation CT scans and transformed to the daily MR images by rigid registration as another input for our SC-GAN in addition to the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between the sCT and the real CT. The experimental results show that our SC-GAN achieved much improved accuracy of sCT generation in both truncated and untruncated regions compared to the original cycleGAN and conditional GAN methods.
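A minimal sketch of a truncation-style loss in the spirit of the one described above: derive a body mask from the synthetic CT and the real CT, and penalize disagreement in the body outline. The paper uses an auto-segmentor or an edge detector; the soft HU-threshold mask and Dice-style penalty below are an assumed stand-in.

```python
import torch

def soft_body_mask(ct_hu, threshold=-500.0, sharpness=0.05):
    """Differentiable body mask: ~1 inside the body, ~0 in surrounding air."""
    return torch.sigmoid(sharpness * (ct_hu - threshold))

def truncation_loss(sct_hu, real_ct_hu, eps=1e-6):
    m_s, m_r = soft_body_mask(sct_hu), soft_body_mask(real_ct_hu)
    intersection = (m_s * m_r).sum()
    dice = (2 * intersection + eps) / (m_s.sum() + m_r.sum() + eps)
    return 1.0 - dice   # small when the sCT body outline matches the real CT
```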


Subject(s)
Tomography, X-Ray Computed , Humans , Computer Simulation
5.
Med Phys ; 51(3): 1626-1636, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38285623

ABSTRACT

BACKGROUND: Misregistration between CT and PET data can result in mis-localization and inaccurate quantification of functional uptake in whole-body PET/CT imaging. This problem is exacerbated when an abnormal inspiration occurs during the free-breathing helical CT (FB CT) used for attenuation correction (AC) of the PET data. In data-driven gated (DDG) PET, the data selected for reconstruction are typically derived from the end-expiration (EE) phase of the breathing cycle, making this potential issue worse. PURPOSE: The objective of this study was to develop a deformable image registration (DIR)-based respiratory motion model to improve the registration and quantification between misregistered FB CT and PET. METHODS: Twenty-two whole-body 18 F-FDG PET/CT scans encompassing 48 lesions in misregistered regions were analyzed in this study. End-inspiration (EI) and EE PET data were derived from -10% to 15% and 30% to 80% of the breathing cycle, respectively. DIR was used to estimate a motion model from the EE to the EI phase of the PET data. The model was then used to generate PET images at any phase, with up to four times the amplitude of motion between EE and EI, for correlation with the misregistered FB CT. Once a matched phase of the FB CT was determined, the FB CT was deformed to a pseudo CT at the EE phase (DIR CT). DIR CT was compared with the ground-truth DDG CT for AC and localization of the DDG PET. RESULTS: Between DDG PET/FB CT and DDG PET/DIR CT, a significant increase in ∆%SUV was observed (p < 0.01), with median values elevating from 26.7% to 42.4%. This new method was most effective for lesions ≤3 cm proximal to the diaphragm (p < 0.001) but showed decreasing efficacy as the distance increased. When FB CT was severely misregistered with DDG PET (>3 cm), DDG PET/DIR CT outperformed DDG PET/FB CT alone (p < 0.05). Even when patients showed varied breathing patterns during the PET/CT scan, DDG PET/DIR CT still surpassed the efficiency of DDG PET/FB CT (p < 0.01). Although DDG PET/DIR CT could not match the performance of the DDG PET/CT ground truth (42.4% vs. 53.6%, p < 0.01), it reached 84% of its quantification, demonstrating good agreement and a strong overall correlation (regression coefficient of 0.94, p < 0.0001). In some cases, anatomical distortion, blurring, and misregistration errors were observed in DIR CT, leaving it unable to correct inaccurate localization near the boundaries of two organs. CONCLUSIONS: Based on the motion model derived from gated PET data, DIR CT can significantly improve the quantification and localization of DDG PET. This approach can achieve a performance level of about 84% of the ground truth established by DDG PET/CT. These results show that self-gated PET and DIR CT may offer an alternative clinical solution to DDG PET and FB CT for quantification without the need for additional cine-CT imaging. DIR CT was at times inferior to DDG CT due to distortion and blurring of anatomy and misregistration errors.
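A rough sketch of the amplitude-scaling idea described above: the EE-to-EI deformation field estimated from the gated PET is linearly scaled to generate candidate phases up to four times the EE-EI amplitude, and the candidate that best matches the free-breathing CT is kept. The similarity score (negative mean absolute difference against a CT-derived surrogate) and the scaling grid are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_dvf(volume, dvf):
    """Warp a (Z, Y, X) volume with a deformation field of shape (3, Z, Y, X) in voxels."""
    grid = np.indices(volume.shape).astype(np.float32)
    return map_coordinates(volume, grid + dvf, order=1, mode="nearest")

def best_matched_scale(pet_ee, dvf_ee_to_ei, fb_ct_surrogate,
                       scales=np.linspace(0.0, 4.0, 17)):
    """Return the motion-amplitude scale whose warped PET best matches the FB CT surrogate."""
    scores = [-np.mean(np.abs(apply_dvf(pet_ee, s * dvf_ee_to_ei) - fb_ct_surrogate))
              for s in scales]
    return float(scales[int(np.argmax(scores))])
```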


Subject(s)
Positron Emission Tomography Computed Tomography , Respiration , Humans , Positron-Emission Tomography/methods , Tomography, X-Ray Computed/methods , Exhalation
6.
Eur J Nucl Med Mol Imaging ; 51(2): 358-368, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37787849

ABSTRACT

PURPOSE: Due to various physical degradation factors and limited counts received, PET image quality needs further improvements. The denoising diffusion probabilistic model (DDPM) was a distribution learning-based model, which tried to transform a normal distribution into a specific data distribution based on iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS: Under the DDPM framework, one way to perform PET image denoising was to provide the PET image and/or the prior image as the input. Another way was to supply the prior image as the network input with the PET image included in the refinement steps, which could fit scenarios with different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangles deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS: Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION: DDPM-based PET image denoising is a flexible framework, which can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet and GAN-based denoising methods.
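A minimal sketch of one DDPM reverse step with a PET data-consistency blend, in the spirit of "MR prior as the network input, PET embedded as a constraint during inference". The noise-schedule handling, the epsilon-prediction network interface, and the simple convex blend used for data consistency are all illustrative assumptions.

```python
import torch

def ddpm_reverse_step(x_t, t, eps_model, mr_prior, pet_noisy,
                      betas, alphas_cumprod, dc_weight=0.1):
    """One reverse diffusion step x_t -> x_{t-1}, pulled toward the measured PET."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    abar_t = alphas_cumprod[t]
    eps = eps_model(torch.cat([x_t, mr_prior], dim=1), t)    # conditioned on the MR prior
    mean = (x_t - beta_t / torch.sqrt(1.0 - abar_t) * eps) / torch.sqrt(alpha_t)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    x_prev = mean + torch.sqrt(beta_t) * noise
    # Data consistency: blend the sample toward the measured (noisy) PET image.
    return (1.0 - dc_weight) * x_prev + dc_weight * pet_noisy
```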


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Signal-To-Noise Ratio , Models, Statistical , Algorithms
7.
IEEE Trans Med Imaging ; PP2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37995174

ABSTRACT

Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive fields. Global multi-head self-attention (MSA) is a popular approach to capture long-range information. However, the calculation of global MSA for 3D images has high computational costs. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.
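As a rough, much-simplified illustration of combining spatial and channel-wise self-attention as described above, the sketch below applies standard multi-head attention over voxel tokens and a second attention pass over pooled channel descriptors. The pooling step, the layer dimensions, and the way the two branches are fused are assumptions; the framework's actual local/global window design is not reproduced here.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.channel_attn = nn.MultiheadAttention(1, 1, batch_first=True)

    def forward(self, feat):                     # feat: (B, C, N) flattened voxels
        tokens = feat.transpose(1, 2)            # (B, N, C): each voxel is a token
        s, _ = self.spatial_attn(tokens, tokens, tokens)
        chan = feat.mean(dim=2, keepdim=True)    # (B, C, 1): pooled channel descriptors
        c, _ = self.channel_attn(chan, chan, chan)
        return s.transpose(1, 2) + c             # fuse spatial and channel responses
```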

8.
ArXiv ; 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38013886

ABSTRACT

Objective: Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few X-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach: We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular X-ray projections. Specifically, PMF-STINR uses spatial implicit neural representation to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results: PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1s) resolution and sub-millimeter accuracy. Significance: PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.

9.
Clin Nucl Med ; 48(12): 1021-1027, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37801580

ABSTRACT

PURPOSE: The aim of this study was to investigate the role of 18 F-DCFPyL PET/CT in the evaluation of prostate cancer (PC) patients after definitive treatment and with a low prostate-specific antigen (PSA) level of ≤0.2 ng/mL. PATIENTS AND METHODS: This retrospective study was conducted in PC patients who received definitive treatment, had a PSA level of ≤0.2 ng/mL, and underwent 18 F-DCFPyL PET/CT within 1 week of PSA examination, without an interval treatment change or a history of other cancer. Patient and tumor characteristics at initial diagnosis, treatment regimens, and findings on 18 F-DCFPyL PET/CT were collected. Patients with a minimum 6-month follow-up (median, 11 months; range, 6-21 months) or definitive biopsy results for the suspected PET/CT findings were included. Imaging findings were reached by consensus among experienced board-certified nuclear medicine physicians. Comprehensive follow-up and/or biopsy results were used as the definitive determination of the presence or absence of disease. Comparisons between the positive and negative 18 F-DCFPyL PET/CT groups were made using descriptive statistics. RESULTS: A total of 96 18 F-DCFPyL PET/CTs from 93 patients met the inclusion criteria. The median Gleason score (GS) of the positive group was 8 (range, 6-10), whereas that of the negative group was 7 (range, 6-10). The median age of the positive group was 71 years (range, 50-90), whereas that of the negative group was 69 years (range, 45-88). There were 49 positive (51%) and 47 negative (49%) 18 F-DCFPyL PET/CTs. Detection rates at PSA levels of ≤0.1 and 0.2 ng/mL were 58.7% (27/46) and 44% (22/50), respectively. The scan-based sensitivity, specificity, positive predictive value, and negative predictive value were 100%, 95%, 96%, and 100% in the group with a PSA level of ≤0.1 ng/mL, and 100%, 97%, 95%, and 100% in the group with a PSA level of 0.2 ng/mL, respectively. Sites of involvement on positive 18 F-DCFPyL PET/CTs were the prostate bed, pelvic lymph nodes, bone, chest and supraclavicular lymph nodes, lung, and adrenal glands. The SUVmax of positive lesions ranged from 1.9 to 141.4; the smallest positive lymph node was 0.4 cm. A high GS of 8-10, known metastatic status (M1), presence of extraprostatic extension, presence of seminal vesicle invasion, and very high-risk PC were significantly associated with positive 18 F-DCFPyL PET/CT results (P < 0.05). Of all analyzed treatment regimens, upfront surgery (radical prostatectomy with or without pelvic lymph node dissection) had a strong correlation with negative PET/CT results (P < 0.001). If patients received androgen deprivation therapy (ADT) only, or ADT plus chemotherapy, the PET/CT results were most likely positive (P = 0.026). For other treatment regimens, there were no statistically significant differences between the groups (P > 0.05). CONCLUSIONS: In PC patients with low PSA levels after definitive treatment, 18 F-DCFPyL PET/CT is most beneficial for detecting disease in patients with a GS of 8 or higher at the time of diagnosis and in those with a history of ADT only or ADT plus chemotherapy. The negative predictive value of 18 F-DCFPyL PET/CT is excellent. However, there is no cutoff PSA level for the indication of 18 F-DCFPyL PET/CT, and no correlation between PSA level and the SUVmax of positive lesions.


Subject(s)
Prostate-Specific Antigen , Prostatic Neoplasms , Male , Humans , Positron Emission Tomography Computed Tomography/methods , Retrospective Studies , Prostatic Neoplasms/pathology , Prostate/pathology
10.
Comput Med Imaging Graph ; 108: 102286, 2023 09.
Article in English | MEDLINE | ID: mdl-37625307

ABSTRACT

Deformable image registration (DIR) between daily and reference images is fundamentally important for adaptive radiotherapy. In the last decade, deep learning-based image registration methods have been developed with faster computation times and improved robustness compared to traditional methods. However, registration performance is often degraded in extra-cranial sites with large volumes containing multiple anatomic regions, such as the computed tomography (CT)/magnetic resonance (MR) images used in head and neck (HN) radiotherapy. In this study, we developed a hierarchical DIR framework, the Patch-based Registration Network (Patch-RegNet), to improve the accuracy and speed of CT-MR and MR-MR registration for head-and-neck MR-Linac treatments. Patch-RegNet includes three steps: a whole-volume global registration, a patch-based local registration, and a patch-based deformable registration. Following the whole-volume rigid registration, the input images were divided into overlapping patches. Then a patch-based rigid registration was applied to achieve accurate local alignment for subsequent DIR. We developed a ViT-Morph model, a combination of a convolutional neural network (CNN) and the Vision Transformer (ViT), for the patch-based DIR. A modality-independent neighborhood descriptor was adopted in our model as the similarity metric to account for both inter-modality and intra-modality registration. The CT-MR and MR-MR DIR models were trained with 242 CT-MR and 213 MR-MR image pairs from 36 patients, respectively, and both were tested with 24 image pairs (CT-MR and MR-MR) from 6 other patients. Registration performance was evaluated with 7 manually contoured organs (brainstem, spinal cord, mandible, left/right parotids, left/right submandibular glands) by comparison with the traditional registration methods in the Monaco treatment planning system and the popular deep learning-based DIR framework, VoxelMorph. Evaluation results show that our method outperformed VoxelMorph by 6% for CT-MR registration and 4% for MR-MR registration based on Dice similarity coefficient (DSC) measurements. Our hierarchical registration framework has been demonstrated to achieve significantly improved DIR accuracy for both CT-MR and MR-MR registration in head-and-neck MR-guided adaptive radiotherapy.
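A minimal sketch of the overlapping-patch decomposition used by the hierarchical framework above (global alignment first, then per-patch registration). The patch size, stride, and edge handling are assumptions.

```python
import numpy as np

def extract_overlapping_patches(volume, patch=(64, 64, 64), stride=(32, 32, 32)):
    """Yield (origin, patch) pairs covering a 3D volume with 50% overlap."""
    Z, Y, X = volume.shape
    for z in range(0, max(Z - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(Y - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(X - patch[2], 0) + 1, stride[2]):
                yield (z, y, x), volume[z:z + patch[0],
                                        y:y + patch[1],
                                        x:x + patch[2]]

# Each fixed/moving patch pair is rigidly aligned before the patch-based
# deformable step; warped patches are then blended back into the full volume.
```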


Subject(s)
Brain Stem , Multimodal Imaging , Humans , Neural Networks, Computer
11.
Phys Med Biol ; 68(4)2023 02 06.
Article in English | MEDLINE | ID: mdl-36638543

ABSTRACT

Objective. Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolutions to enable applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection sample available for each CBCT reconstruction (one projection for one CBCT volume). Approach. We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR mapped the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimized the neuron weightings of the MLPs via acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we also introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal mapping to address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended-cardiac-torso (XCAT) phantom and a patient 4D-CBCT dataset to simulate different lung motion scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity. The XCAT scenarios also contain inter-scan anatomical variations including tumor shrinkage and tumor position change. Main results. STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung target to an average center-of-mass error of 1-2 mm, with corresponding relative errors of reconstructed dynamic CBCTs around 10%. Significance. STINR offers a general framework allowing accurate dynamic CBCT reconstruction for image-guided radiotherapy. It is a one-shot learning method that does not rely on pre-training and is not susceptible to generalizability issues. It also allows natural super-resolution. It can be readily applied to other imaging modalities as well.
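A minimal sketch of the PCA-based motion model that constrains the temporal mapping described above: a deformation vector field (DVF) is the mean DVF plus a few principal motion components weighted by time-dependent coefficients predicted by the temporal MLP. Array shapes and the number of components are assumptions.

```python
import numpy as np

def dvf_from_pca(mean_dvf, components, coeffs):
    """
    mean_dvf:   (3, Z, Y, X) mean deformation field
    components: (K, 3, Z, Y, X) principal motion components
    coeffs:     (K,) time-dependent coefficients (e.g., output of the temporal MLP)
    """
    return mean_dvf + np.tensordot(coeffs, components, axes=(0, 0))
```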


Subject(s)
Lung Neoplasms , Lung , Humans , Motion , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Phantoms, Imaging , Cone-Beam Computed Tomography/methods , Algorithms , Image Processing, Computer-Assisted/methods , Four-Dimensional Computed Tomography/methods
12.
Med Phys ; 50(7): 4399-4414, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36698291

ABSTRACT

BACKGROUND: MR scans used in radiotherapy can be partially truncated due to the limited field of view (FOV), affecting dose calculation accuracy in MR-based radiation treatment planning. PURPOSE: We proposed a novel Compensation-cycleGAN (Comp-cycleGAN), created by modifying the cycle-consistent generative adversarial network (cycleGAN), to simultaneously create synthetic CT (sCT) images and compensate for the missing anatomy in truncated MR images. METHODS: Computed tomography (CT) and T1 MR images with complete anatomy from 79 head-and-neck patients were used for this study. The original MR images were manually cropped by 10-25 mm at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing, and the rest of the patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and known structures. After the model was trained, sCT images with complete anatomy can be generated by feeding only the truncated MR images into the model. In addition, the external body contours acquired from the CT images with full anatomy could be an optional input for the proposed method, leveraging the additional information of the actual body shape for each test patient. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the sCT and real CT images to quantify overall sCT performance. To further evaluate the shape accuracy, we generated the external body contours for the sCT and the original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between the body contours of the sCT and the original MR images for the truncation region to assess the anatomy compensation accuracy. RESULTS: The average MAE, PSNR, and SSIM calculated over the test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from a cycleGAN model trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3/0.7 mm for the proposed Comp-cycleGAN models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy. CONCLUSIONS: We developed a novel Comp-cycleGAN model that can effectively create sCT with complete anatomy compensation from truncated MR images, which could potentially benefit MRI-based treatment planning.
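A minimal sketch of the truncation simulation described above: crop a 10-25 mm band off the posterior edge of the MR volume to mimic a limited FOV, yielding original/truncated training pairs. The axis convention (posterior along increasing Y) and the 1 mm voxel size are assumptions.

```python
import numpy as np

def simulate_posterior_truncation(mr, voxel_size_mm=1.0, rng=None):
    """mr: (Z, Y, X) volume; returns (truncated copy, number of voxels removed)."""
    rng = np.random.default_rng() if rng is None else rng
    cut_mm = rng.uniform(10.0, 25.0)
    cut_vox = int(round(cut_mm / voxel_size_mm))
    truncated = mr.copy()
    truncated[:, -cut_vox:, :] = 0     # zero out the assumed posterior band
    return truncated, cut_vox
```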


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Radionuclide Imaging , Magnetic Resonance Imaging/methods , Radiotherapy Planning, Computer-Assisted/methods
13.
Front Cardiovasc Med ; 9: 1071701, 2022.
Article in English | MEDLINE | ID: mdl-36531700

ABSTRACT

Introduction: Chemoradiotherapy (CRT) has been associated with an increased incidence of cardiovascular (CV) adverse events (CVAE). Coronary artery calcium (CAC) scoring has been shown to predict coronary events beyond traditional CV risk factors. This study examines whether CAC, measured on standard-of-care, non-contrast chest CT (NCCT) imaging, predicts the development of CVAE in patients with non-small cell lung cancer (NSCLC) treated with CRT. Methods: Patients with NSCLC treated with CRT at MD Anderson Cancer Center from 7/2009 until 4/2014 and who had at least one NCCT scan within 6 months of their first CRT were identified. CAC scoring was performed on NCCT scans by an expert cardiologist and a cardiac radiologist following the 2016 SCCT/STR guidelines. CVAE were graded based on the most recent Common Terminology Criteria for Adverse Events (CTCAE) version 5.0. CVAE were also grouped into (i) coronary/vascular events, (ii) arrhythmias, or (iii) heart failure. All CVAE were adjudicated by a board-certified cardiologist. Results: Of a total of 193 patients, 45% were female and 91% Caucasian. Mean age was 64 ± 9 years and mean BMI 28 ± 6 kg/m2. Of the 193 patients, 74% had CAC >0 Agatston units (AU), 49% CAC ≥100 AU, and 36% CAC ≥300 AU. Twenty-nine patients (15%) developed a grade ≥2 CVAE during a median follow-up of 24.3 months (IQR: 10.9-51.7). Of those, 11 (38%) were coronary/vascular events. In the multivariate Cox regression analysis, controlling for mean heart dose and pre-existing CV disease, a higher CAC score was independently associated with the development of a grade ≥2 CVAE [HR: 1.04 (per 100 AU), 95% CI: 1.01-1.08, p = 0.022] and with worse overall survival (OS; CAC ≥100 vs. <100 AU, HR: 1.64, 95% CI: 1.11-2.44, p = 0.013). In a sub-analysis evaluating the type of CVAE, coronary/vascular events were significantly associated with higher baseline CAC (median: 676 AU vs. 73 AU, p = 0.035). Discussion: Cardiovascular adverse events are frequent in patients with NSCLC treated with CRT. CAC calculated on standard-of-care NCCT can predict the development of CVAE, and specifically coronary/vascular events, as well as OS, independently of other traditional risk factors and radiation mean heart dose. Clinical trial registration: [https://clinicaltrials.gov/ct2/show/NCT00915005], identifier [NCT00915005].

15.
Ann Surg Oncol ; 29(12): 7473-7482, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35789301

ABSTRACT

BACKGROUND: High-grade adenocarcinoma subtypes (micropapillary and solid) treated with sublobar resection have an unfavorable prognosis compared with those treated with lobectomy. We investigated the potential of incorporating solid attenuation component (SAC) masks with deep learning (DL) in the prediction of high-grade components to optimize the surgical strategy preoperatively. METHODS: A total of 502 patients with pathologically confirmed high-grade adenocarcinomas were retrospectively enrolled between 2016 and 2020. The SAC-attention DL model (SACA-DL) was developed to apply solid-attenuation-component-like subregion masks (tumor area ≥ -190 HU) to guide the DL model in predicting high-grade subtypes. The SACA-DL was assessed using 5-fold cross-validation and external validation in the training and testing sets, respectively. The performance, evaluated using the area under the curve (AUC), was compared between SACA-DL and the DL model without SACs attention (DLwoSACs), a prior radiomics model, and a model based on the consolidation/tumor (C/T) diameter ratio. RESULTS: We classified 313 and 189 patients into the training and testing cohorts, respectively. The SACA-DL achieved an AUC of 0.91 for cross-validation, which was significantly superior to those of the DLwoSACs (AUC = 0.88; P = 0.02), the prior radiomics model (AUC = 0.85; P = 0.004), and the C/T ratio (AUC = 0.84; P = 0.002). An AUC of 0.93 was achieved for external validation with the SACA-DL, which was significantly better than those of the DLwoSACs (AUC = 0.89; P = 0.04), the prior radiomics model (AUC = 0.85; P < 0.001), and the C/T ratio (AUC = 0.85; P < 0.001). CONCLUSIONS: The combination of solid-attenuation-component-like subregion masks with the DL model is a promising approach for the preoperative prediction of high-grade adenocarcinoma subtypes.
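A minimal sketch of the solid-attenuation-component-like subregion mask that guides the model above: within the tumor segmentation, voxels with HU of at least -190 form the SAC-like mask supplied as attention guidance (the array-based interface is an assumption).

```python
import numpy as np

def sac_like_mask(ct_hu, tumor_mask, hu_threshold=-190.0):
    """ct_hu: CT volume in HU; tumor_mask: boolean tumor segmentation of the same shape."""
    return np.logical_and(tumor_mask, ct_hu >= hu_threshold)
```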


Subject(s)
Adenocarcinoma of Lung , Adenocarcinoma , Deep Learning , Lung Neoplasms , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/pathology , Adenocarcinoma/surgery , Adenocarcinoma of Lung/diagnostic imaging , Adenocarcinoma of Lung/pathology , Adenocarcinoma of Lung/surgery , Attention , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung Neoplasms/surgery , Retrospective Studies , Tomography, X-Ray Computed/methods
16.
Med Phys ; 49(11): 7085-7094, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35766454

ABSTRACT

BACKGROUND: Respiratory motion correction is of importance in studies of coronary plaques employing 18 F-NaF; however, the validation of motion correction techniques mainly relies on indirect measures such as test-retest repeatability assessments. In this study, we aim to compare and, thus, validate the respiratory motion vector fields obtained directly from the positron emission tomography (PET) images against the respiratory motion observed during four-dimensional (4D) cine-computed tomography (CT) by an expert observer. PURPOSE: To investigate the accuracy of the motion correction employed in a software package (FusionQuant) used for the evaluation of 18 F-NaF PET studies by comparing the respiratory motion of coronary plaques observed on PET to the respiratory motion observed on 4D cine-CT images. METHODS: This study included 23 patients who underwent thoracic PET scans for the assessment of coronary plaques using 18 F-sodium fluoride (18 F-NaF). All patients underwent a 5-s cine-CT (4D-CT), a coronary CT angiography (CTA), and 18 F-NaF PET. The 4D-CT and PET scans were reconstructed into 10 phases. Respiratory motion was estimated for the coronary plaques visible on non-contrast CT using diffeomorphic registrations (PET) and compared to the respiratory motion observed on 4D-CT. We report the PET motion vector fields obtained in the three principal axes in addition to the 3D motion. Statistical differences were examined using paired t-tests. Signal-to-noise ratios (SNR) are reported for the single-phase images (end-expiratory phase) and for the motion-corrected image series (employing the motion vector fields extracted during the diffeomorphic registrations). RESULTS: In total, 19 coronary plaques were identified in 16 patients. No statistical differences were observed for the maximum respiratory motion in the x and y directions or for the 3D motion fields (magnitude and direction) between CT and PET (X direction: 4D CT = 2.5 ± 1.5 mm, PET = 2.4 ± 3.2 mm; Y direction: 4D CT = 2.3 ± 1.9 mm, PET = 0.7 ± 2.9 mm; 3D motion: 4D CT = 6.6 ± 3.1 mm, PET = 5.7 ± 2.6 mm; all p ≥ 0.05). Significant differences in respiratory motion were observed in the systems' Z direction: 4D CT = 4.9 ± 3.4 mm, PET = 2.3 ± 3.2 mm, p = 0.04. Significantly improved SNR is reported for the motion-corrected images compared to the end-expiratory phase images (end-expiratory phase = 6.8 ± 4.8, motion corrected = 12.2 ± 4.5, p = 0.001). CONCLUSION: Similar respiratory motion was observed in two directions and in 3D for coronary plaques on 4D CT as detected by the automatic respiratory motion correction of coronary PET using FusionQuant. The respiratory motion correction technique significantly improved the SNR in the images.
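A minimal sketch of the paired comparison and SNR reporting used above: per-plaque motion amplitudes measured on 4D cine-CT and on PET are compared with a paired t-test, and SNR is taken as the ratio of a region mean to the background standard deviation. Array contents and the exact SNR definition are assumptions.

```python
import numpy as np
from scipy import stats

def compare_motion(ct_motion_mm, pet_motion_mm):
    """Both arrays hold one 3D motion amplitude (mm) per plaque, in the same order."""
    t_stat, p_value = stats.ttest_rel(ct_motion_mm, pet_motion_mm)
    return {"mean_ct": float(np.mean(ct_motion_mm)),
            "mean_pet": float(np.mean(pet_motion_mm)),
            "t": float(t_stat), "p": float(p_value)}

def snr(roi_mean, background_sd):
    return roi_mean / background_sd
```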


Subject(s)
Four-Dimensional Computed Tomography , Sodium Fluoride , Humans , Sodium , Positron-Emission Tomography
17.
Med Phys ; 49(5): 2979-2994, 2022 May.
Article in English | MEDLINE | ID: mdl-35235216

ABSTRACT

PURPOSE: In some noisy low dose CT lung cancer screening images, we noticed that the CT density values of air were increased and the visibility of emphysema was distinctly decreased. By examining histograms of these images, we found that the CT density values were truncated at -1024 HU. The purpose of this study was to investigate the effect of pixel value truncation on the visibility of emphysema using mathematical models. METHODS AND MATERIALS: Assuming CT noise follows a normal distribution, we derived the relationship between the mean CT density value and the standard deviation (SD) when the pixel values below -1024 HU are truncated and replaced by -1024 HU. To validate our mathematical model, 20 untruncated phantom CT images were truncated by simulation, and the mean CT density values and SD of air in the images were measured and compared with the theoretical values. In addition, the mean CT density values and SD of air were measured in 100 cases of real clinical images obtained by GE, Siemens, and Philips scanners, respectively, and the agreement with the theoretical values was examined. Next, the contrast-to-noise ratio (CNR) between air (-1000 HU) and lung parenchyma (-850 HU) was derived from the mathematical model in the presence and absence of truncation as a measure of the visibility of emphysema. In addition, the radiation dose ratios required to obtain the same CNR in the case with and without truncation were also calculated. RESULTS: The mathematical model revealed that when the pixel values are truncated, the mean CT density values are proportional to the noise magnitude when the magnitude exceeds a certain level. The mean CT density values and SD measured in the images with pixel values truncated by simulation and in the real clinical images acquired by GE and Philips scanners agreed well with the theoretical values from our mathematical model. In the Siemens images, the measured and theoretical values agreed well when a portion of the truncated values were replaced by random values instead of simply replacing by -1024 HU. The CNR of air and lung parenchyma was lowered by truncating CT density values compared to that of no truncation. Furthermore, it was found that higher radiation dose was required to obtain the same CNR with truncation as without. As an example, when the noise SD was 60 HU, the radiation dose required for the GE and Philips truncation method was about 1.2 times higher than that without truncation, and that for the Siemens truncation method was about 1.4 times higher. CONCLUSIONS: It was demonstrated mathematically that pixel value truncation causes a brightening of the mean CT density value and decreases the CNR of emphysema. Our results indicate that it is advisable to turn off truncation at -1024 HU, especially when scanning at low and ultra-low radiation doses in the thorax.
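The censoring model described above has a simple closed form: if air voxels follow N(mu, sd^2) and every value below -1024 HU is replaced by -1024 HU, the mean of the censored values rises with the noise level, which in turn lowers the air/parenchyma contrast-to-noise ratio. A minimal sketch follows; keeping the nominal noise SD in the CNR denominator is a simplifying assumption, not the paper's exact derivation.

```python
import numpy as np
from scipy.stats import norm

def censored_mean(mu, sd, floor=-1024.0):
    """E[max(X, floor)] for X ~ N(mu, sd^2)."""
    a = (floor - mu) / sd
    return floor * norm.cdf(a) + mu * norm.sf(a) + sd * norm.pdf(a)

def air_parenchyma_cnr(mu_air=-1000.0, mu_lung=-850.0, sd=60.0, truncated=True):
    m_air = censored_mean(mu_air, sd) if truncated else mu_air
    return abs(mu_lung - m_air) / sd   # parenchyma sits far above the -1024 HU floor

# Example: with sd = 60 HU, censoring raises the apparent air mean above -1000 HU,
# so the CNR with truncation is lower than without it.
```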


Subject(s)
Emphysema , Lung Neoplasms , Pulmonary Emphysema , Early Detection of Cancer , Humans , Lung Neoplasms/diagnostic imaging , Phantoms, Imaging , Radiation Dosage , Thorax , Tomography, X-Ray Computed/methods
18.
Med Phys ; 49(6): 3597-3611, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35324002

ABSTRACT

BACKGROUND: The accuracy of positron emission tomography (PET) quantification and localization can be compromised if a misregistered computed tomography (CT) is used for attenuation correction (AC) in PET/CT. As data-driven gating (DDG) continues to grow in clinical use, these issues are becoming more relevant with respect to solutions for gated CT. PURPOSE: In this work, a new automated DDG CT method was developed to provide average CT and DDG CT for AC of PET and DDG PET, respectively. METHODS: An automatic DDG CT was developed to provide the end-expiratory (EE) and end-inspiratory (EI) phases of images from low-dose cine CT images, with all phases being averaged to generate an average CT. The respiratory phases of EE and EI were determined according to lung region Hounsfield unit (HU) values and body outline contours. The average CT was used for AC of baseline PET and DDG CT at EE phase was used for AC of DDG PET at the quiescent or EE phase. The EI and EE phases obtained with DDG CT were used for assessing the magnitude of respiratory motion. The proposed DDG CT was compared to two commercial CT gating methods: (1) 4D CT (external device based) and (2) D4D CT (DDG based) in 38 patient datasets with respect to respiratory phase image selection, lung HU, lung volume, and image artifacts. In a separate set of twenty consecutive PET/CT studies containing a mix of 18 F-FDG, 68 Ga-Dotatate, and 64 Cu-Dotatate scans, the proposed DDG CT was compared with D4D CT for impacts on registration and quantification in DDG PET/CT. RESULTS: In the EE phase, the images selected by DDG CT and 4D CT were identical 62.5% ± 21.6% of the time, whereas DDG CT and D4D CT were 6.5% ± 9.7%, and 4D CT and D4D CT were 8.6% ± 12.2%. These differences in EE phase image selection were significant (p < 0.0001). In the EI phase, the images selected by DDG CT and 4D CT were identical 68.2% ± 18.9% of the time, DDG CT and D4D CT were 63.9% ± 18.8%, and 4D CT and D4D CT were 61.2% ± 19.8%. These differences were not significant. The mean lung HU and volumes were not statistically different (p > 0.1) among the three methods. In some studies, DDG CT was better than D4D or 4D CT in the appropriate selection of the EE and EI phases, and D4D CT was found to reverse the EE and EI phases or not select the correct images by visual inspection. A statistically significant improvement of DDG CT over D4D CT for AC of DDG PET was also demonstrated with PET quantification analysis. When irregular breath cycles were present in the cine CT, DDG CT could be used to replace average CT for the improved AC of baseline PET. CONCLUSION: A new automatic DDG CT was developed to tackle the issues of misregistration and tumor motion in PET/CT imaging. DDG CT was significantly more consistent than D4D CT in selecting the EE phase images as the clinical standard of 4D CT. When compared to both commercial gated CT methods of 4D CT and D4D CT, DDG CT appeared to be more robust in the lower lung and upper diaphragm regions where misregistration and tumor motion often occur. DDG CT offered improved AC for DDG PET relative to D4D CT. In cases with irregular respiratory motion, DDG CT improved AC over average CT for baseline PET. The new DDG CT provides the benefits of 4D CT without the need for external device gating.
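A minimal sketch of selecting the end-expiration (EE) and end-inspiration (EI) frames from a low-dose cine CT by mean lung HU, in the spirit of the approach described above: lung density (and thus mean lung HU) is highest at EE and lowest at EI. The lung masks are assumed to be available, and the body-outline check mentioned in the text is omitted.

```python
import numpy as np

def select_ee_ei(cine_ct, lung_masks):
    """cine_ct: (T, Z, Y, X) cine phases; lung_masks: matching boolean lung masks."""
    mean_lung_hu = np.array([phase[mask].mean()
                             for phase, mask in zip(cine_ct, lung_masks)])
    ee_idx = int(np.argmax(mean_lung_hu))   # least air in the lungs
    ei_idx = int(np.argmin(mean_lung_hu))   # most air in the lungs
    avg_ct = cine_ct.mean(axis=0)           # average CT over all cine phases
    return ee_idx, ei_idx, avg_ct
```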


Subject(s)
Four-Dimensional Computed Tomography , Positron Emission Tomography Computed Tomography , Four-Dimensional Computed Tomography/methods , Humans , Motion , Positron-Emission Tomography/methods , Radionuclide Imaging
19.
Phys Med Biol ; 67(8)2022 04 08.
Article in English | MEDLINE | ID: mdl-35313286

ABSTRACT

Objective. Data-driven gating (DDG) can address patient motion issues and enhance PET quantification but suffers from increased image noise from utilization of <100% of PET data. Misregistration between DDG-PET and CT may also occur, altering the potential benefits of gating. Here, the effects of PET acquisition time and CT misregistration were assessed with a combined DDG-PET/DDG-CT technique. Approach. In the primary PET bed with lesions of interest and likely respiratory motion effects, PET acquisition time was extended to 12 min and a low-dose cine CT was acquired to enable DDG-CT. Retrospective reconstructions were created for both non-gated (NG) and DDG-PET using 30 s to 12 min of PET data. Both the standard helical CT and DDG-CT were used for attenuation correction of DDG-PET data. SUVmax, SUVpeak, and CNR were compared for 45 lesions in the liver and lung from 27 cases. Main results. For both NG-PET (p = 0.0041) and DDG-PET (p = 0.0028), only the 30 s acquisition time showed clear SUVmax bias relative to the 3 min clinical standard. SUVpeak showed no bias at any change in acquisition time. DDG-PET alone increased SUVmax by 15 ± 20% (p < 0.0001), which was increased further by an additional 15 ± 29% (p = 0.0007) with DDG-PET/CT. Both 3 min and 6 min DDG-PET had lesion CNR statistically equivalent to 3 min NG-PET, which then increased at 12 min by 28 ± 48% (p = 0.0022). DDG-PET/CT at 6 min had comparable counts to 3 min NG-PET, but significantly increased CNR by 39 ± 46% (p < 0.0001). Significance. DDG-PET with 50% of counts did not lead to inaccurate or biased SUV; the increased SUV resulted from gating. Improved registration from DDG-CT was equally as important as motion correction with DDG-PET for increasing SUV in DDG-PET/CT. Lesion detectability could be significantly improved when DDG-PET used counts equivalent to NG-PET, but only when combined with DDG-CT in DDG-PET/CT.
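A small helper for the lesion contrast-to-noise ratio used as an endpoint above; CNR definitions vary between studies, and this SUVpeak-and-background form is an assumption rather than the paper's exact formula.

```python
import numpy as np

def lesion_cnr(lesion_suv_peak, background_suvs):
    """CNR of a lesion against a background region (e.g., nearby liver or lung)."""
    bkg = np.asarray(background_suvs, dtype=float)
    return (lesion_suv_peak - bkg.mean()) / bkg.std(ddof=1)
```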


Subject(s)
Positron Emission Tomography Computed Tomography , Respiratory-Gated Imaging Techniques , Humans , Motion , Positron Emission Tomography Computed Tomography/methods , Positron-Emission Tomography/methods , Respiratory-Gated Imaging Techniques/methods , Retrospective Studies , Tomography, X-Ray Computed
20.
Clin Nucl Med ; 47(3): 209-218, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35020640

ABSTRACT

PURPOSE: The aim of this study was to develop a pretherapy PET/CT-based prediction model for treatment response to ibrutinib in lymphoma patients. PATIENTS AND METHODS: One hundred sixty-nine lymphoma patients with 2441 lesions were studied retrospectively. All eligible lymphomas on pretherapy 18F-FDG PET images were contoured and segmented for radiomic analysis. Lesion- and patient-based responsiveness to ibrutinib was determined retrospectively using the Lugano classification. PET radiomic features were extracted. A radiomic model was built to predict ibrutinib response. The prognostic significance of the radiomic model was evaluated independently in a test cohort and compared with conventional PET metrics: SUVmax, metabolic tumor volume, and total lesion glycolysis. RESULTS: The radiomic model had an area under the receiver operating characteristic curve (ROC AUC) of 0.860 (sensitivity, 92.9%, specificity, 81.4%; P < 0.001) for predicting response to ibrutinib, outperforming the SUVmax (ROC AUC, 0.519; P = 0.823), metabolic tumor volume (ROC AUC, 0.579; P = 0.412), total lesion glycolysis (ROC AUC, 0.576; P = 0.199), and a composite model built using all 3 (ROC AUC, 0.562; P = 0.046). The radiomic model increased the probability of accurately predicting ibrutinib-responsive lesions from 84.8% (pretest) to 96.5% (posttest). At the patient level, the model's performance (ROC AUC = 0.811; P = 0.007) was superior to that of conventional PET metrics. Furthermore, the radiomic model showed robustness when validated in treatment subgroups: first (ROC AUC, 0.916; P < 0.001) versus second or greater (ROC AUC, 0.842; P < 0.001) line of defense and single treatment (ROC AUC, 0.931; P < 0.001) versus multiple treatments (ROC AUC, 0.824; P < 0.001). CONCLUSIONS: We developed and validated a pretherapy PET-based radiomic model to predict response to treatment with ibrutinib in a diverse cohort of lymphoma patients.
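A minimal sketch of a radiomics-style prediction pipeline like the one evaluated above: per-lesion features, a classifier fit on a training cohort, and ROC AUC reported on a held-out cohort. The logistic-regression classifier, the random split, and the feature matrix layout are assumptions; the paper's actual model and feature selection are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_radiomic_model(features, response_labels, seed=0):
    """features: (n_lesions, n_features) radiomic matrix; labels: 1 = response to ibrutinib."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, response_labels, test_size=0.3, stratify=response_labels,
        random_state=seed)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model, auc
```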


Subject(s)
Fluorodeoxyglucose F18 , Lymphoma , Adenine/analogs & derivatives , Humans , Lymphoma/diagnostic imaging , Lymphoma/drug therapy , Piperidines , Positron Emission Tomography Computed Tomography , Retrospective Studies