Results 1 - 4 of 4
1.
Eur J Nucl Med Mol Imaging ; 51(9): 2532-2546, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38696130

ABSTRACT

PURPOSE: To improve the reproducibility and predictive performance of PET radiomic features in multicentric studies using cycle-consistent generative adversarial network (GAN) harmonization approaches.

METHODS: GAN-harmonization was developed to perform image style and texture translation of whole-body PET scans between different centers and scanners. It was evaluated on two retrospectively collected open datasets and two tasks. First, GAN-harmonization was performed on a dual-center lung cancer cohort (127 female, 138 male), in which the reproducibility of radiomic features in healthy liver tissue was evaluated. Second, GAN-harmonization was applied to a head and neck cancer cohort (43 female, 154 male) acquired at three centers. Here, the clinical impact of GAN-harmonization was analyzed by predicting the development of distant metastases using a logistic regression model incorporating first-order statistics and texture features from baseline 18F-FDG PET before and after harmonization.

RESULTS: Image quality remained high after harmonization across all datasets (structural similarity: left kidney ≥ 0.800, right kidney ≥ 0.806, liver ≥ 0.780, lung ≥ 0.838, spleen ≥ 0.793, whole body ≥ 0.832). With GAN-harmonization, the inter-site reproducibility of radiomic features in healthy liver tissue increased by at least 5 ± 14% (first-order), 16 ± 7% (GLCM), 19 ± 5% (GLRLM), 16 ± 8% (GLSZM), 17 ± 6% (GLDM), and 23 ± 14% (NGTDM). In the head and neck cancer cohort, outcome prediction improved from an AUC of 0.68 (95% CI 0.66-0.71) to 0.73 (0.71-0.75) after GAN-harmonization.

CONCLUSIONS: GANs are capable of performing image harmonization and increase the reproducibility and predictive performance of radiomic features derived from different centers and scanners.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Female , Male , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/standards , Positron-Emission Tomography/methods , Lung Neoplasms/diagnostic imaging , Middle Aged , Reproducibility of Results , Head and Neck Neoplasms/diagnostic imaging , Retrospective Studies , Fluorodeoxyglucose F18 , Aged
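
At the core of such cycle-consistent GAN harmonization is a cycle-consistency objective: an image translated from center A's style to center B's and back should reconstruct the original. Below is a minimal PyTorch sketch of that term only; the tiny generators and all names are illustrative stand-ins, not the authors' architecture.

```python
# Minimal sketch of a CycleGAN-style cycle-consistency loss (illustrative;
# not the authors' implementation). Generators g_ab / g_ba translate PET
# slices between two hypothetical centers A and B.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(g_ab, g_ba, real_a, real_b, lam=10.0):
    """A->B->A and B->A->B reconstructions should match the originals."""
    rec_a = g_ba(g_ab(real_a))
    rec_b = g_ab(g_ba(real_b))
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

def tiny_generator():
    # Stand-in for a full residual generator network.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

g_ab, g_ba = tiny_generator(), tiny_generator()
a = torch.randn(2, 1, 64, 64)  # toy PET slices "from center A"
b = torch.randn(2, 1, 64, 64)  # toy PET slices "from center B"
cycle_consistency_loss(g_ab, g_ba, a, b).backward()
```

In full training this term is added to the usual adversarial losses of both discriminators; the abstract reports only the harmonization outcome, not these details.
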
2.
Sci Rep ; 11(1): 8838, 2021 04 23.
Article in English | MEDLINE | ID: mdl-33893323

ABSTRACT

A prototype of a navigation system fusing two image modalities is presented. The standard inter-modality registration is replaced by a tracker-based image registration of calibrated imaging devices. Intra-procedure transrectal US (TRUS) images were merged with pre-procedure magnetic resonance (MR) images for prostate biopsy. The registration between MR and TRUS images was performed via an additional abdominal 3D-US (ab-3D-US), which enables replacing the inter-modal MR/TRUS registration with an intra-modal ab-3D-US/3D-TRUS registration. Calibration procedures were carried out using an optical tracking system (OTS) for the pre-procedure image fusion of the ab-3D-US with the MR. The inter-modal ab-3D-US/MR image fusion was evaluated using a multi-cone phantom for the target registration error (TRE) and a prostate phantom for the Dice score and the Hausdorff distance of lesions. Finally, the pre-procedure ab-3D-US was registered with the TRUS images, and the errors of the transformation from MR to TRUS were determined. The TRE of the ab-3D-US/MR image registration was 1.81 mm. The Dice score and the Hausdorff distance for ab-3D-US and MR were 0.67 and 3.19 mm; for TRUS and MR they were 0.67 and 3.18 mm. The hybrid navigation system showed sufficient accuracy for fusion-guided biopsy procedures on prostate phantoms. The system might provide intra-procedure fusion for most US-guided biopsy and ablation interventions.
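
The registration chain described above can be expressed as a composition of rigid transforms: the inter-modal MR-to-TRUS mapping is obtained by combining the pre-procedure MR/ab-3D-US fusion with the intra-modal ab-3D-US/TRUS registration. A small NumPy sketch of that composition, with purely hypothetical matrices:

```python
# Sketch of transform chaining: T_mr_to_trus = T_ab3dus_to_trus @ T_mr_to_ab3dus.
# All rotations and translations below are made-up placeholders.
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t (mm)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_mr_to_ab3dus = rigid(np.eye(3), [5.0, -2.0, 1.0])    # pre-procedure fusion
T_ab3dus_to_trus = rigid(np.eye(3), [-1.5, 0.5, 3.0])  # intra-modal registration

T_mr_to_trus = T_ab3dus_to_trus @ T_mr_to_ab3dus
p_mr = np.array([10.0, 20.0, 30.0, 1.0])   # homogeneous point in MR space
print((T_mr_to_trus @ p_mr)[:3])           # the same point in TRUS space
```

The benefit of the chain is that only the easier intra-modal ab-3D-US/3D-TRUS registration has to be solved during the procedure; the harder inter-modal fusion is done beforehand.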

3.
J Nucl Med ; 62(6): 871-879, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33246982

ABSTRACT

This work set out to develop a motion-correction approach aided by conditional generative adversarial network (cGAN) methodology that allows reliable, data-driven determination of involuntary subject motion during dynamic 18F-FDG brain studies.

Methods: Ten healthy volunteers (5 men/5 women; mean age ± SD, 27 ± 7 y; weight, 70 ± 10 kg) underwent a test-retest 18F-FDG PET/MRI examination of the brain (n = 20). The imaging protocol consisted of a 60-min PET list-mode acquisition acquired contemporaneously with MRI, including MR navigators and a 3-dimensional time-of-flight MR angiography sequence. Arterial blood samples were collected as a reference standard representing the arterial input function (AIF). The cGAN was trained on 70% of the total datasets (n = 16, randomly chosen), which were motion-corrected using the MR navigators. The resulting cGAN mappings (between individual frames and the reference frame [55-60 min after injection]) were then applied to the test dataset (the remaining 30%, n = 6), producing artificially generated low-noise images from early high-noise PET frames. These low-noise images were then coregistered to the reference frame, yielding 3-dimensional motion vectors. Performance of cGAN-aided motion correction was assessed by comparing the image-derived input function (IDIF) extracted from the cGAN-aided motion-corrected dynamic sequence with the AIF on the basis of the areas under the curves (AUCs). Moreover, clinical relevance was assessed by directly comparing the average cerebral metabolic rates of glucose (CMRGlc) in gray matter calculated using the AIF and the IDIF.

Results: The absolute percentage difference between the AUCs derived using the motion-corrected IDIF and the AIF was 1.2 ± 0.9%. The gray-matter CMRGlc values determined using these 2 input functions differed by less than 5% (2.4 ± 1.7%).

Conclusion: A fully automated, data-driven motion-compensation approach was established and tested for 18F-FDG PET brain imaging. cGAN-aided motion correction enables the translation of noninvasive clinical absolute quantification from PET/MR to PET/CT by allowing the accurate determination of motion vectors from the PET data itself.


Subject(s)
Brain/diagnostic imaging , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted/methods , Movement , Neural Networks, Computer , Positron-Emission Tomography , Humans , Magnetic Resonance Imaging
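
The headline comparison in this study, IDIF versus AIF via areas under the curves, reduces to numerical integration of the two time-activity curves. A sketch with synthetic curves (the kinetic shape and noise level are placeholders, not study data):

```python
# Sketch of the AUC-based IDIF-vs-AIF comparison using synthetic curves.
import numpy as np

def auc(y, x):
    """Trapezoidal area under curve y(x)."""
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

t = np.linspace(0, 60, 121)              # minutes post-injection
aif = 50 * t * np.exp(-0.15 * t)         # toy arterial input function
idif = aif * (1 + 0.01 * np.random.randn(t.size))  # noisy image-derived curve

diff = abs(auc(idif, t) - auc(aif, t)) / auc(aif, t)
print(f"absolute AUC difference: {diff:.2%}")
```

In the study, this percentage difference was 1.2 ± 0.9% after cGAN-aided motion correction.
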
4.
PLoS One ; 15(3): e0229441, 2020.
Article in English | MEDLINE | ID: mdl-32214326

ABSTRACT

PURPOSE: We compared two 3D ultrasound (US) modes (3D free-hand and 3D wobbler) to determine which is more suitable for 3D-US/3D-US registration in clinical guidance applications. The typical error sources were evaluated step by step with respect to their impact on the final localization error.

METHODS: Multi-point target and hand-eye calibration methods were used for 3D US calibration together with a newly designed multi-cone phantom. Pointer-based and image-based methods were used for 2D US calibration. The calibration target error was computed using a different multi-cone phantom. An egg-shaped phantom served as ground truth for comparing distortions in both 3D modes along with volume measurements. Finally, we compared 3D ultrasound images acquired with the 3D wobbler and 3D free-hand modes with respect to their 3D-US/3D-US registration accuracy using both phantom and patient data. A theoretical step-by-step error analysis was performed and compared to the empirical data.

RESULTS: Target registration errors based on calibration with the 3D multi-point and 2D pointer/image methods were comparable (~1 mm). Both outperformed the 3D hand-eye method (error > 2 mm). Volume measurements with the 3D free-hand mode were closest to the ground truth (about 6% error, compared to 9% with the 3D wobbler mode). Additional scans on phantoms showed a 3D-US/3D-US registration error below 1 mm for both the 3D free-hand and 3D wobbler modes. Results with patient data showed a greater error with the 3D free-hand mode (6.50-13.37 mm) than with the 3D wobbler mode (2.99 ± 1.54 mm). All measured errors were in accordance with their theoretical upper bounds.

CONCLUSION: While both 3D volume methods showed comparable 3D-US/3D-US registration results for phantom images, for patient data the 3D wobbler mode is superior to the 3D free-hand mode. The effect of all error sources could be estimated by theoretical derivations.


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Prostate/diagnostic imaging , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Calibration , Humans , Male , Models, Theoretical
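
The target registration error used throughout these evaluations is the mean distance between calibrated-and-mapped fiducials (here, cone tips) and their reference positions. A minimal NumPy sketch with illustrative data; the transform and point sets are made-up placeholders:

```python
# Sketch of a TRE evaluation on a multi-cone phantom (illustrative data).
import numpy as np

def tre(T, pts_src, pts_ref):
    """Mean Euclidean error after mapping source points with 4x4 transform T."""
    src_h = np.c_[pts_src, np.ones(len(pts_src))]  # homogeneous coordinates
    mapped = (T @ src_h.T).T[:, :3]
    return np.linalg.norm(mapped - pts_ref, axis=1).mean()

T_est = np.eye(4)                                   # hypothetical calibration result
cones_us = np.random.rand(7, 3) * 50                # cone tips localized in 3D US (mm)
cones_ref = cones_us + 0.5 * np.random.randn(7, 3)  # reference positions (mm)
print(f"TRE: {tre(T_est, cones_us, cones_ref):.2f} mm")
```

A TRE around 1 mm, as reported for the 3D multi-point calibration, corresponds directly to this per-fiducial mean distance.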