1 - 20 of 328
1.
J Appl Clin Med Phys ; : e14389, 2024 May 22.
Article En | MEDLINE | ID: mdl-38778565

PURPOSE: The aim of this study was to compare the organ doses assessed through a digital phantom-based and a patient-specific dosimetric tool in adult routine thorax computed tomography (CT) examinations, with reference to physical dose measurements performed in anthropomorphic phantoms. METHODS: Two Monte Carlo-based dose calculation tools were used to assess organ doses in routine adult thorax CT examinations: a digital phantom-based dosimetry tool (NCICT, National Cancer Institute, USA) and a patient-specific individualized dosimetry tool (ImpactMC, CT Imaging GmbH, Germany). Digital phantoms and patients were classified into four groups according to their water equivalent diameter (Dw). Organ doses, normalized to the volume computed tomography dose index (CTDIvol), were assessed for the lungs, esophagus, heart, breast, active bone marrow, and skin. Organ doses were compared to measurements performed using thermoluminescent detectors (TLDs) in two physical anthropomorphic phantoms that simulate the average adult individual as a male (Alderson Research Labs, USA) and as a female (ATOM Phantoms, USA). RESULTS: The average percent difference of NCICT to TLD and ImpactMC to TLD dose measurements across all organs in both sexes was 13% and 6%, respectively. The average ± 1 standard deviation in dose values across all organs with NCICT, ImpactMC, and TLDs was ± 0.06, ± 0.19, and ± 0.13 (mGy/mGy), respectively. Organ doses decreased with increasing Dw in both NCICT and ImpactMC. CONCLUSION: Organ doses estimated with ImpactMC were in closer agreement with TLD measurements than those from NCICT. This may be attributed to the inherent property of the ImpactMC methodology to generate phantoms that resemble the realistic anatomy of the examined patient, whereas the NCICT methodology incorporates an anatomical discrepancy between phantoms and patients.
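As a rough illustration of the comparison metric above, the mean percent difference between a tool's organ-dose estimates and the TLD reference can be computed as follows; the exact averaging convention is not given in the abstract, and the dose values below are hypothetical:

```python
import numpy as np

def mean_percent_difference(tool_doses, tld_doses):
    """Mean absolute percent difference between a dose-calculation tool's
    organ-dose estimates and reference TLD measurements (both in mGy)."""
    tool = np.asarray(tool_doses, dtype=float)
    tld = np.asarray(tld_doses, dtype=float)
    return float(np.mean(np.abs(tool - tld) / tld) * 100.0)

# Hypothetical per-organ doses (mGy) for one phantom: lungs, esophagus, heart
tld = [10.0, 8.0, 12.0]
ncict = [11.3, 9.1, 13.4]
print(f"NCICT vs TLD: {mean_percent_difference(ncict, tld):.1f}%")
```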

2.
BJR Open ; 6(1): tzae009, 2024 Jan.
Article En | MEDLINE | ID: mdl-38798693
3.
Phys Med Biol ; 69(11)2024 May 30.
Article En | MEDLINE | ID: mdl-38744305

This review casts a spotlight on intraoperative positron emission tomography (PET) scanners and the distinctive challenges they confront. Specifically, these systems contend with the necessity of partial coverage geometry, essential for ensuring adequate access to the patient. This inherently leans them towards limited-angle PET imaging, bringing along its array of reconstruction and geometrical sensitivity challenges. Compounding this, the need for real-time imaging in navigation systems mandates rapid acquisition and reconstruction times. For these systems, the emphasis is on dependable PET image reconstruction (without significant artefacts) while rapid processing takes precedence over the spatial resolution of the system. In contrast, specimen PET imagers are unburdened by the geometrical sensitivity challenges, thanks to their ability to leverage full coverage PET imaging geometries. For these devices, the focus shifts: high spatial resolution imaging takes precedence over rapid image reconstruction. This review concurrently probes into the technical complexities of both intraoperative and specimen PET imaging, shedding light on their recent designs, inherent challenges, and technological advancements.


Image Processing, Computer-Assisted , Operating Rooms , Positron-Emission Tomography , Positron-Emission Tomography/instrumentation , Humans , Image Processing, Computer-Assisted/methods
4.
Med Phys ; 2024 Apr 17.
Article En | MEDLINE | ID: mdl-38629779

BACKGROUND: Contrast-enhanced computed tomography (CECT) provides much more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing from public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to effective use of available CECT images in clinical research. PURPOSE: The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS: A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Seven organs including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features, including the average, standard deviation, and the 10th, 50th, and 90th percentiles, extracted from the above-mentioned masks, were fed to machine learning models after feature selection and reduction to classify the CT images into one of the four above-mentioned classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics. RESULTS: The best performance was achieved by Boruta feature selection and the random forest (RF) model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest per-class accuracy was observed for class #2 (0.9888), which is still an excellent result. In the 10-fold strategy, only 33 of 2509 cases (∼1.4%) were misclassified. The performance over all folds was consistent.
CONCLUSIONS: We developed a fast, accurate, reliable, and explainable methodology to classify contrast media phases which may be useful in data curation and annotation in big online datasets or local datasets with non-standard or no series description. Our model containing two steps of deep learning and machine learning may help to exploit available datasets more effectively.
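The pipeline described above (first-order statistics per organ mask, then a random forest classifier) can be sketched with scikit-learn; the feature generator and the class-dependent intensity model below are illustrative assumptions on synthetic data, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def first_order_features(values):
    """The five first-order statistics used as predictors: mean, standard
    deviation, and the 10th, 50th, and 90th percentiles of the voxel
    values inside one organ mask."""
    v = np.asarray(values, dtype=float)
    return np.array([v.mean(), v.std(), *np.percentile(v, [10, 50, 90])])

# Synthetic stand-in for per-organ HU distributions in the four phases
# (0 = non-contrast, 1 = arterial, 2 = venous, 3 = delayed); the linear
# phase-to-HU model is purely illustrative.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2, 3], 50)
X = np.array([first_order_features(rng.normal(40.0 * c, 15.0, size=500))
              for c in y])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold, as in the study
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```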

5.
Ann Nucl Med ; 2024 Apr 04.
Article En | MEDLINE | ID: mdl-38575814

PURPOSE: This study aimed to examine the robustness of positron emission tomography (PET) radiomic features extracted via different segmentation methods before and after ComBat harmonization in patients with non-small cell lung cancer (NSCLC). METHODS: We included 120 patients (positive recurrence = 46 and negative recurrence = 74) referred for PET scanning as a routine part of their care. All patients had biopsy-proven NSCLC. Nine segmentation methods were applied to each image, including manual delineation, K-means (KM), watershed, fuzzy C-means, region growing, local active contour (LAC), and iterative thresholding (IT) with 40, 45, and 50% thresholds. Diverse image discretizations, both without a filter and with different wavelet decompositions, were applied to the PET images. Overall, 6741 radiomic features were extracted from each image (749 radiomic features from each segmented area). Non-parametric empirical Bayes (NPEB) ComBat harmonization was used to harmonize the features. A linear support vector classifier (LinearSVC) with L1 regularization was used for feature selection, and a support vector machine (SVM) classifier with fivefold nested cross-validation (StratifiedKFold, n_splits = 5) was used to predict recurrence in NSCLC patients and to assess the impact of ComBat harmonization on the outcome. RESULTS: Of the 749 extracted radiomic features, 206 (27%) and 389 (51%) showed excellent reliability (ICC ≥ 0.90) against segmentation method variation before and after NPEB ComBat harmonization, respectively. Among all features, 39 demonstrated poor reliability, which declined to 10 after ComBat harmonization. The feature sets based on fixed 64-bin discretization (without any filter) and on the wavelet (LLL) decomposition achieved the best performance in terms of robustness against diverse segmentation techniques, both before and after ComBat harmonization.
The first-order and GLRLM feature families showed the largest number of robust features before ComBat harmonization, and the first-order and NGTDM families after it. In terms of predicting recurrence in NSCLC, our findings indicate that ComBat harmonization can significantly enhance machine learning outcomes, particularly improving the accuracy of watershed segmentation, which initially had fewer reliable features than manual contouring. Following the application of ComBat harmonization, the majority of cases saw a substantial increase in sensitivity and specificity. CONCLUSION: Radiomic features are vulnerable to different segmentation methods. ComBat harmonization might be considered a solution to overcome the poor reliability of radiomic features.
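The reliability screen above relies on the intraclass correlation coefficient. A minimal sketch of the two-way random, absolute-agreement, single-measure ICC(2,1), one common variant (the abstract does not state which form was used), treating segmentation methods as "raters":

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n_subjects, k_raters) array -- here, one radiomic
    feature measured under k different segmentation methods."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)            # per-subject means
    col_means = Y.mean(axis=0)            # per-method means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # methods MS
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A feature would then be called "excellent" when `icc_2_1(...) >= 0.90`, mirroring the threshold used in the study.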

6.
Clin Genitourin Cancer ; 22(3): 102076, 2024 Mar 13.
Article En | MEDLINE | ID: mdl-38593599

The objective of this work was to review comparisons of the efficacy of 68Ga-PSMA-11 (prostate-specific membrane antigen) PET/CT and multiparametric magnetic resonance imaging (mpMRI) in the detection of prostate cancer among patients undergoing initial staging prior to radical prostatectomy or experiencing recurrent prostate cancer, based on histopathological data. A comprehensive search was conducted in PubMed and Web of Science, and relevant articles were analyzed with various parameters, including year of publication, study design, patient count, age, PSA (prostate-specific antigen) value, Gleason score, standardized uptake value (SUVmax), detection rate, treatment history, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and PI-RADS (prostate imaging reporting and data system) scores. Only studies directly comparing PSMA-PET and mpMRI were considered, while those examining combined accuracy or focusing on either modality alone were excluded. In total, 24 studies comprising 1717 patients were analyzed, with the most common indication for screening being staging, followed by relapse. The findings indicated that 68Ga-PSMA-PET/CT effectively diagnosed prostate cancer in patients with suspected or confirmed disease, and both methods exhibited comparable efficacy in identifying lesion-specific information. However, notable heterogeneity was observed, highlighting the necessity for standardization of imaging and histopathology systems to mitigate inter-study variability. Future research should prioritize evaluating the combined diagnostic performance of both modalities to enhance sensitivity and reduce unnecessary biopsies. Overall, the utilization of PSMA-PET and mpMRI in combination holds substantial potential for significantly advancing the diagnosis and management of prostate cancer.

7.
Phys Med ; 121: 103357, 2024 May.
Article En | MEDLINE | ID: mdl-38640631

PURPOSE: Large scintillation crystal-based gamma cameras play a crucial role in nuclear medicine imaging. In this study, a large field-of-view (FOV) gamma detector consisting of 48 square PMTs was developed using new readout electronics that reduce the 48 (6 × 8) analog signals to 14 (6 + 8) analog sums of each row and column, lowering complexity and cost while preserving image quality. METHODS: All 14 analog signals were converted to digital signals using AD9257 high-speed analog-to-digital converters (ADCs) driven by a SPARTAN-6 family field-programmable gate array (FPGA) in order to calculate the signal integrals. The positioning algorithm was based on the digital correlated signal enhancement (CSE) algorithm implemented in the acquisition software. The performance characteristics of the developed gamma camera were measured using the NEMA NU 1-2018 standards. RESULTS: The measured energy resolution of the developed detector was 8.7% at 140 keV, with an intrinsic spatial resolution of 3.9 mm. The uniformity was within 0.6%, while the linearity was within 0.1%. CONCLUSION: The performance evaluation demonstrated that the developed detector has suitable specifications for high-end nuclear medicine imaging.


Gamma Cameras , Electronics/instrumentation , Equipment Design , Algorithms , Image Processing, Computer-Assisted , Costs and Cost Analysis
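The study's positioning step uses the CSE algorithm; as a simplified stand-in, an Anger-style centroid over the 6 row sums and 8 column sums illustrates how a 2D position and energy estimate can be recovered from the 14 digitized signals (the PMT pitch values below are assumed for illustration):

```python
import numpy as np

ROW_PITCH_MM = 60.0   # assumed PMT pitch; not stated in the abstract
COL_PITCH_MM = 60.0

def anger_position(row_sums, col_sums):
    """Energy-weighted centroid of the 6 row and 8 column sum signals,
    a classical Anger-logic sketch (not the paper's CSE algorithm)."""
    r = np.asarray(row_sums, dtype=float)
    c = np.asarray(col_sums, dtype=float)
    energy = r.sum()  # total collected light ~ deposited energy
    y = ROW_PITCH_MM * np.average(np.arange(r.size), weights=r)
    x = COL_PITCH_MM * np.average(np.arange(c.size), weights=c)
    return x, y, energy
```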
8.
Phys Eng Sci Med ; 2024 Mar 21.
Article En | MEDLINE | ID: mdl-38512435

Manual segmentation poses a time-consuming challenge for disease quantification, therapy evaluation, treatment planning, and outcome prediction. Convolutional neural networks (CNNs) hold promise in accurately identifying tumor locations and boundaries in PET scans. However, a major hurdle is the extensive amount of supervised, annotated data necessary for training. To overcome this limitation, this study explores semi-supervised approaches utilizing unlabeled data, specifically focusing on PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL) obtained from two centers. We considered 2-[18F]FDG PET images of 292 patients: PMBCL (n = 104) and DLBCL (n = 188) (n = 232 for training and validation, and n = 60 for external testing). We harnessed classical wisdom embedded in traditional segmentation methods, such as the fuzzy clustering loss function (FCM), to tailor the training strategy for a 3D U-Net model, incorporating both supervised and unsupervised learning approaches. Various supervision levels were explored, including fully supervised methods with labeled FCM and unified focal/Dice loss, unsupervised methods with robust FCM (RFCM) and Mumford-Shah (MS) loss, and semi-supervised methods combining an unsupervised term with a supervised one: MS with Dice loss (MS + Dice) or RFCM with labeled FCM (RFCM + FCM). The unified loss function yielded higher Dice scores (0.73 ± 0.11; 95% CI 0.67-0.8) than Dice loss alone (p value < 0.01). Among the semi-supervised approaches, RFCM + αFCM (α = 0.3) showed the best performance, with a Dice score of 0.68 ± 0.10 (95% CI 0.45-0.77), outperforming MS + αDice at any supervision level (any α) (p < 0.01). Another semi-supervised approach, MS + αDice (α = 0.2), achieved a Dice score of 0.59 ± 0.09 (95% CI 0.44-0.76), surpassing the other supervision levels (p < 0.01).
Given the time-consuming nature of manual delineations and the inconsistencies they may introduce, semi-supervised approaches hold promise for automating medical imaging segmentation workflows.
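A minimal sketch of the loss components discussed above: a supervised soft Dice term and a simplified two-cluster FCM-style unsupervised term, combined with a weighting factor α as in the semi-supervised settings. This is an illustrative stand-in, not the paper's exact (R)FCM/MS formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Supervised soft Dice loss on probability maps in [0, 1]."""
    p, t = np.ravel(pred), np.ravel(target)
    return 1.0 - (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)

def fcm_loss(pred, image, c0, c1):
    """Unsupervised two-cluster fuzzy-C-means-style loss: squared-membership
    weighted distance of each voxel intensity to the background (c0) and
    foreground (c1) centroids. A simplified stand-in for (R)FCM."""
    pred = np.asarray(pred, float)
    image = np.asarray(image, float)
    return float(np.mean(pred**2 * (image - c1)**2 +
                         (1.0 - pred)**2 * (image - c0)**2))

def semi_supervised_loss(pred, target, image, c0, c1, alpha=0.3):
    """Combined objective: supervised term + alpha * unsupervised term."""
    return soft_dice_loss(pred, target) + alpha * fcm_loss(pred, image, c0, c1)
```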

9.
Med Biol Eng Comput ; 2024 Mar 27.
Article En | MEDLINE | ID: mdl-38536580

This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques, with a dedicated MRI phantom. Four scanners were used to acquire MRI scans of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion times were employed: 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was then applied to the extracted radiomic features. Finally, the intraclass correlation coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of the radiomic features. Before and after ComBat harmonization, with the different image pre-processing techniques, the number of non-significant features in the KW test ranged between 0-5 and 29-74 for the various scanners, 31-91 and 37-92 for the three repeated tests, 0-33 and 34-90 for the FAs, and 3-68 and 65-89 for the IRs, respectively. Likewise, the number of features with ICC over 90% ranged between 0-8 and 6-60 for the various scanners, 11-75 and 17-80 for the three repeated tests, 3-83 and 9-84 for the FAs, and 3-49 and 3-63 for the IRs. The use of various scanners, IRs, and FAs has a great impact on radiomic features. However, the majority of scanner-robust features are also robust to IR and FA.
Among the acquisition factors examined, repeated tests on a single scanner have a negligible impact on radiomic features, whereas different scanners and acquisition parameters, combined with various image pre-processing techniques, might affect radiomic features to a large extent. ComBat harmonization might significantly improve the reproducibility of MRI radiomic features.
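The Kruskal-Wallis screen described above can be sketched directly with SciPy; a feature whose distributions do not differ significantly across scanners or acquisition settings is counted as robust (non-significant = robust, as in the text):

```python
import numpy as np
from scipy.stats import kruskal

def is_robust(feature_values_by_setting, alpha=0.05):
    """Return True when the Kruskal-Wallis test finds no significant
    distributional shift of one radiomic feature across the settings
    (scanners, flip angles, inversion times, ...)."""
    _, p_value = kruskal(*feature_values_by_setting)
    return p_value >= alpha
```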

10.
Z Med Phys ; 2024 Jan 31.
Article En | MEDLINE | ID: mdl-38302292

In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners, or improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is the varying tracer uptake across patients and/or anatomical regions. However, existing deep learning-based algorithms utilize a fixed model across different subjects and/or anatomical regions during inference, which could result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, in the absence of structural information at inference. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model to perform subject- and region-specific filtering through modulating the convolution kernels in accordance with the contextual coherency within the neighboring slices. This way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation purposes, which is more than one order of magnitude larger than in previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground-truth images corrected with the aid of CT scans.
For quantitative assessment, we evaluated our proposed method on 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of -2.11 ± 2.73% across the whole body.
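The reported error metrics can be reproduced with a simple voxel-wise computation; the exact masking and averaging conventions are not stated in the abstract, so this is an illustrative sketch that assumes a mean over a reference-positive region:

```python
import numpy as np

def relative_errors(pred, ref, mask=None):
    """Voxel-wise error metrics between a predicted (e.g. CT-free
    attenuation/scatter-corrected) PET volume and the CT-based reference.
    Returns (absolute relative error %, signed relative error %)."""
    p = np.asarray(pred, dtype=float)
    r = np.asarray(ref, dtype=float)
    if mask is not None:  # e.g. a body-contour mask
        p, r = p[mask], r[mask]
    are = 100.0 * np.mean(np.abs(p - r) / r)
    re = 100.0 * np.mean((p - r) / r)
    return are, re
```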

11.
Med Phys ; 2024 Feb 09.
Article En | MEDLINE | ID: mdl-38335175

BACKGROUND: Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology is still limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification with respect to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using centralized and FL approaches. For FL, we employed DPFL approaches. A membership inference attack was also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test sets. In addition, models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized model and the DPFL model, respectively.
The DeLong test did not show a statistically significant difference between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. CONCLUSION: The performance of the proposed model was comparable to that of the centralized model while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
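Federated training as described rests on server-side aggregation of locally trained weights; a minimal FedAvg sketch illustrates the core aggregation step (the privacy-preserving machinery that distinguishes DPFL is omitted here):

```python
import numpy as np

def fedavg(center_weights, center_sizes):
    """One FedAvg aggregation round: the server averages the model
    parameters returned by each center, weighted by its dataset size.
    `center_weights` is a list (one entry per center) of lists of
    per-layer weight arrays."""
    sizes = np.asarray(center_sizes, dtype=float)
    frac = sizes / sizes.sum()  # each center's share of the data
    return [sum(f * layer for f, layer in zip(frac, layers))
            for layers in zip(*center_weights)]
```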

12.
Radiology ; 310(2): e231319, 2024 Feb.
Article En | MEDLINE | ID: mdl-38319168

Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.


Image Processing, Computer-Assisted , Radiomics , Humans , Reproducibility of Results , Biomarkers , Multimodal Imaging
13.
Eur J Nucl Med Mol Imaging ; 51(7): 1937-1954, 2024 Jun.
Article En | MEDLINE | ID: mdl-38326655

PURPOSE: Total metabolic tumor volume (TMTV) segmentation has significant value in enabling quantitative imaging biomarkers for lymphoma management. In this work, we tackle the challenging task of automated tumor delineation in lymphoma from PET/CT scans using a cascaded approach. METHODS: Our study included 1418 2-[18F]FDG PET/CT scans from four different centers. The dataset was divided into 900 scans for the development/validation/testing phases and 518 for multi-center external testing. The former consisted of 450 lymphoma, lung cancer, and melanoma scans, along with 450 negative scans, while the latter consisted of lymphoma patients from different centers with diffuse large B cell, primary mediastinal large B cell, and classic Hodgkin lymphoma cases. Our approach involves resampling the PET/CT images into different voxel sizes in the first step, followed by training multi-resolution 3D U-Nets on each resampled dataset using a fivefold cross-validation scheme. The models trained on different data splits were ensembled. After applying soft voting to the predicted masks, in the second step, we input the probability-averaged predictions, along with the input imaging data, into another 3D U-Net. Models were trained with a semi-supervised loss. We additionally considered the effectiveness of using test-time augmentation (TTA) to improve the segmentation performance after training. In addition to quantitative analysis including Dice score (DSC) and TMTV comparisons, a qualitative evaluation was also conducted by nuclear medicine physicians. RESULTS: Our cascaded soft-voting guided approach resulted in performance with an average DSC of 0.68 ± 0.12 for the internal test data from the developmental dataset, and an average DSC of 0.66 ± 0.18 on the multi-site external data (n = 518), significantly outperforming (p < 0.001) state-of-the-art (SOTA) approaches including nnU-Net and SWIN UNETR.
While TTA yielded enhanced performance gains for some of the comparator methods, its impact on our cascaded approach was found to be negligible (DSC: 0.66 ± 0.16). Our approach reliably quantified TMTV, with a correlation of 0.89 with the ground truth (p < 0.001). Furthermore, in terms of visual assessment, concordance between quantitative evaluations and clinician feedback was observed in the majority of cases. The average relative error (ARE) and the absolute error (AE) in TMTV prediction on external multi-centric dataset were ARE = 0.43 ± 0.54 and AE = 157.32 ± 378.12 (mL) for all the external test data (n = 518), and ARE = 0.30 ± 0.22 and AE = 82.05 ± 99.78 (mL) when the 10% outliers (n = 53) were excluded. CONCLUSION: TMTV-Net demonstrates strong performance and generalizability in TMTV segmentation across multi-site external datasets, encompassing various lymphoma subtypes. A negligible reduction of 2% in overall performance during testing on external data highlights robust model generalizability across different centers and cancer types, likely attributable to its training with resampled inputs. Our model is publicly available, allowing easy multi-site evaluation and generalizability analysis on datasets from different institutions.


Image Processing, Computer-Assisted , Lymphoma , Positron Emission Tomography Computed Tomography , Tumor Burden , Humans , Positron Emission Tomography Computed Tomography/methods , Lymphoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Fluorodeoxyglucose F18 , Automation , Male , Female
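The soft-voting and TMTV-quantification steps described above can be sketched as follows; the 0.5 probability threshold is an illustrative assumption, not necessarily the paper's choice:

```python
import numpy as np

def soft_vote(prob_maps):
    """Average the per-model voxel probabilities (soft voting) before
    feeding the ensemble prediction to the second-stage network."""
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

def tmtv_ml(prob_map, voxel_volume_ml, threshold=0.5):
    """Total metabolic tumor volume: number of voxels whose averaged
    tumor probability exceeds the threshold, times the voxel volume."""
    return float((prob_map > threshold).sum() * voxel_volume_ml)
```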
14.
Radiat Oncol ; 19(1): 12, 2024 Jan 22.
Article En | MEDLINE | ID: mdl-38254203

BACKGROUND: This study aimed to investigate the value of clinical features, radiomic features extracted from gross tumor volumes (GTVs) delineated on CT images, dose distributions (Dosiomics), and the fusion of CT and dose distributions to predict outcomes in head and neck cancer (HNC) patients. METHODS: A cohort of 240 HNC patients from five different centers was obtained from The Cancer Imaging Archive. Seven strategies were applied, including four non-fusion (Clinical, CT, Dose, DualCT-Dose) and three fusion algorithms (latent low-rank representation (LLRR), wavelet, and weighted least square (WLS)). The fusion algorithms were used to fuse the pre-treatment CT images and 3-dimensional dose maps. Overall, 215 radiomics and Dosiomics features were extracted from the GTVs, along with seven clinical features. Five feature selection (FS) methods in combination with six machine learning (ML) models were implemented. The performance of the models was quantified using the concordance index (CI) in one-center-leave-out 5-fold cross-validation for overall survival (OS) prediction, considering the time-to-event. RESULTS: The mean CI and Kaplan-Meier curves were used for comparisons. The CoxBoost ML model using the minimal depth (MD) FS method and the glmnet model using the variable hunting (VH) FS method showed the best performance, with CI = 0.73 ± 0.15 for features extracted from LLRR-fused images. In addition, both the glmnet-Cindex and Coxph-Cindex classifiers achieved a CI of 0.72 ± 0.14 by employing the dose images (plus incorporated clinical features) only. CONCLUSION: Our results demonstrated that clinical features, Dosiomics, and the fusion of dose and CT images with specific ML-FS models can predict the overall survival of HNC patients with acceptable accuracy. Moreover, the performance of the ML methods among the three different strategies was almost comparable.


Head and Neck Neoplasms , Radiomics , Humans , Prognosis , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Machine Learning , Tomography, X-Ray Computed
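The concordance index used to evaluate the survival models above can be sketched as Harrell's C; this naive O(n²) version handles right-censoring through the event indicator:

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's concordance index for overall survival: among comparable
    pairs (the earlier time is an observed event), the fraction in which
    the model assigns the higher risk to the earlier failure; tied risks
    count as 0.5."""
    t = np.asarray(times, dtype=float)
    e = np.asarray(events, dtype=bool)
    r = np.asarray(risk_scores, dtype=float)
    concordant = comparable = 0.0
    for i in range(len(t)):
        if not e[i]:          # censored subjects cannot anchor a pair
            continue
        for j in range(len(t)):
            if t[i] < t[j]:   # i failed first -> comparable pair
                comparable += 1
                if r[i] > r[j]:
                    concordant += 1.0
                elif r[i] == r[j]:
                    concordant += 0.5
    return concordant / comparable
```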
15.
J Biomed Inform ; 150: 104583, 2024 02.
Article En | MEDLINE | ID: mdl-38191010

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD: Our approach involves utilizing a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes. Each code is responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise. When applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be utilized in other medical imaging modalities as well.


Algorithms , Privacy , Information Dissemination
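The key obfuscation idea, hiding codes under seed-generated pseudo-random noise so that only a key holder can undo it, can be caricatured in a few lines; this toy sketch is not the paper's actual scheme and offers no real cryptographic guarantee:

```python
import numpy as np

def obfuscate(code, seed, scale=10.0):
    """Hide a code vector under pseudo-random noise; the seed acts as
    the compact private key (toy illustration only)."""
    rng = np.random.default_rng(seed)
    return code + rng.normal(0.0, scale, size=code.shape)

def recover(public_code, seed, scale=10.0):
    """Regenerate the same noise stream from the seed and strip it."""
    rng = np.random.default_rng(seed)
    return public_code - rng.normal(0.0, scale, size=public_code.shape)
```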
16.
Med Phys ; 51(2): 870-880, 2024 Feb.
Article En | MEDLINE | ID: mdl-38197492

BACKGROUND: Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE: In this study, different input settings were considered in the model training to investigate deep learning-based AC in the image space. METHODS: Three different deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (PET segmentation-based AC [SegAC-PET]), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict the ground-truth CT-based attenuation-corrected PET images (CTAC-PET). Since a simple two-class AC map (generated from NonAC-PET images) can easily be produced, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS: The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively.
A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model for the activity concentration in the cerebellum region, as opposed to biases of 0.08 ± 0.28% and 0.05 ± 0.28% for the networks that used only NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION: Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.
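The voxel-wise RMSE and region-wise SUV bias reported in this abstract can be computed as in the following minimal sketch; the toy volumes, mask, and a uniform +1% bias are hypothetical stand-ins for the predicted and CTAC reference SUV maps.

```python
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray, mask: np.ndarray) -> float:
    """Root mean square error (in SUV units) over a brain-region mask."""
    diff = pred[mask] - ref[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

def suv_bias_percent(pred: np.ndarray, ref: np.ndarray, mask: np.ndarray) -> float:
    """Mean relative SUV quantification bias (%) within the region."""
    return float(100.0 * np.mean((pred[mask] - ref[mask]) / ref[mask]))

# toy volumes standing in for the CTAC reference and a network prediction
ref = np.full((4, 4, 4), 2.0)           # uniform SUV of 2.0
pred = ref * 1.01                       # prediction with a uniform +1% bias
mask = np.ones_like(ref, dtype=bool)    # whole-volume "region"
```

With these toy inputs the region bias is +1% and the RMSE is 0.02 SUV; in the study the same metrics were evaluated per brain region against the CTAC-PET ground truth.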


Deep Learning , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods , Brain/diagnostic imaging
17.
Eur J Nucl Med Mol Imaging ; 51(6): 1516-1529, 2024 May.
Article En | MEDLINE | ID: mdl-38267686

PURPOSE: Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been increasing interest in leveraging their calculation speed and automation capabilities for different tasks. We aimed to develop a hybrid transformer-based DL model that incorporates a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy. The goal was to enhance the performance of the model to achieve accuracy levels closely aligned with Monte Carlo (MC) simulations, considered the standard of reference. We extended our analysis to include the MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS: We used a dataset consisting of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, MIRD formalism approaches, namely single S-value (SSV) and MSV techniques, were performed. A UNEt TRansformer (UNETR) DL architecture was trained using five-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, whereas the difference between MC and MSV (MC-MSV) was set as output. The DL output was then added back to the MSV maps to recover the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both the voxel level and the organ level (organs at risk and lesions). RESULTS: The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) compared to MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for the DL, MSV, and SSV approaches, respectively.
The computational time for MC was the highest (~2 days for a single-bed SPECT study) compared to MSV, SSV, and DL, whereas the DL-based approach outperformed the other approaches in terms of time efficiency (3 s for a single-bed SPECT study). Organ-wise analysis showed absolute percent errors of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% in lesion-absorbed doses for the SSV, MSV, and DL approaches, respectively. CONCLUSION: A hybrid transformer-based deep learning model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, specifically in heterogeneous regions. The model achieved accuracy close to the MC gold standard and has potential for clinical implementation on large-scale datasets.
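The S-value formalism underlying the SSV baseline amounts to convolving the cumulated (time-integrated) activity map with a voxel S-value kernel. The sketch below illustrates that operation with SciPy; the 3×3×3 kernel values are hypothetical toy numbers, not published 177Lu S-values.

```python
import numpy as np
from scipy.ndimage import convolve

def ssv_dose(cumulated_activity: np.ndarray, s_kernel: np.ndarray) -> np.ndarray:
    """Voxel dose as the convolution of the cumulated activity map with a
    voxel S-value kernel (the single S-value approach, which assumes a
    homogeneous soft-tissue medium everywhere)."""
    return convolve(cumulated_activity, s_kernel, mode="constant", cval=0.0)

# toy, symmetric 3x3x3 S-value kernel (hypothetical values)
s_kernel = np.zeros((3, 3, 3))
s_kernel[1, 1, 1] = 0.6                          # self-dose term
s_kernel[0, 1, 1] = s_kernel[2, 1, 1] = 0.1      # nearest neighbors (axial)
s_kernel[1, 0, 1] = s_kernel[1, 2, 1] = 0.05     # nearest neighbors (in-plane)
s_kernel[1, 1, 0] = s_kernel[1, 1, 2] = 0.05

activity = np.zeros((5, 5, 5))
activity[2, 2, 2] = 1.0                          # single-voxel source
dose = ssv_dose(activity, s_kernel)              # kernel replicated at the source
```

The MSV refinement replaces the single kernel with tissue-dependent kernels selected per voxel, and the DL model in this study learns only the residual between MC and MSV, which is why adding its output back to the MSV map recovers an MC-like dose map.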


Octreotide , Octreotide/analogs & derivatives , Organometallic Compounds , Radiometry , Radiopharmaceuticals , Single Photon Emission Computed Tomography Computed Tomography , Humans , Octreotide/therapeutic use , Organometallic Compounds/therapeutic use , Single Photon Emission Computed Tomography Computed Tomography/methods , Radiometry/methods , Radiopharmaceuticals/therapeutic use , Precision Medicine/methods , Deep Learning , Male , Female , Monte Carlo Method , Image Processing, Computer-Assisted/methods , Neuroendocrine Tumors/radiotherapy , Neuroendocrine Tumors/diagnostic imaging
18.
Ann Nucl Med ; 38(1): 31-70, 2024 Jan.
Article En | MEDLINE | ID: mdl-37952197

We focus on reviewing state-of-the-art developments in dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. Then, we present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review challenges and possible artifacts introduced by image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial angular coverage geometries, and how they can be addressed. Finally, we address open issues concerning the cost/benefit analysis of dedicated PET scanners, how far theoretical conceptual designs are from the market and the clinic, and strategies to reduce fabrication cost without compromising performance.


Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Phantoms, Imaging , Positron-Emission Tomography/methods , Image Processing, Computer-Assisted/methods , Brain , Algorithms
19.
Med Phys ; 51(1): 319-333, 2024 Jan.
Article En | MEDLINE | ID: mdl-37475591

BACKGROUND: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms that exploit the available multi-modal information are still lacking. PURPOSE: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different information fusions were applied at the input, layer, and decision levels. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). The different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS: Among single modalities, PET had reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22.
Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting over several algorithms results in statistically significant improvements in the segmentation of HNC.
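The output-level majority voting that produced the best Dice scores above is simple to state: a voxel is labeled tumor when more than half of the candidate segmentations agree. A minimal NumPy sketch, with toy 1D masks standing in for the 2D segmentation outputs:

```python
import numpy as np

def majority_vote(masks):
    """Output-level fusion: a voxel is tumor when more than half of the
    candidate segmentations (e.g., PET-only plus image- and network-level
    fusion outputs) mark it as tumor."""
    stacked = np.stack([m.astype(int) for m in masks])
    return (stacked.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy binary masks standing in for three candidate segmentations
m1 = np.array([1, 1, 0, 0], dtype=np.uint8)
m2 = np.array([1, 0, 1, 0], dtype=np.uint8)
m3 = np.array([1, 1, 0, 0], dtype=np.uint8)
fused = majority_vote([m1, m2, m3])
```

STAPLE, the other output-level method used in the study, replaces the simple vote with an EM estimate that weights each candidate by its estimated sensitivity and specificity.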


Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Algorithms , Head and Neck Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods
20.
Eur J Nucl Med Mol Imaging ; 51(3): 734-748, 2024 Feb.
Article En | MEDLINE | ID: mdl-37897616

PURPOSE: To investigate the impact of reduced injected doses on the quantitative and qualitative assessment of the amyloid PET tracers [18F]flutemetamol and [18F]florbetaben. METHODS: Cognitively impaired and unimpaired individuals (N = 250, 36% Aβ-positive) were included and injected with [18F]flutemetamol (N = 175) or [18F]florbetaben (N = 75). PET scans were acquired in list mode (90-110 min post-injection), and reduced-dose images were simulated to generate images at 75, 50, 25, 12.5, and 5% of the original injected dose. Images were reconstructed using vendor-provided reconstruction tools and visually assessed for Aβ pathology. SUVRs were calculated for a global cortical region and three smaller regions using a cerebellar cortex reference tissue, and Centiloid (CL) values were computed. Absolute and percentage differences in SUVR and CL were calculated between dose levels, and the ability to discriminate between Aβ- and Aβ+ scans was evaluated using ROC analyses. Finally, intra-reader agreement between the reduced-dose and 100% images was evaluated. RESULTS: At 5% injected dose, the change in SUVR was 3.72% and 3.12%, with absolute changes in Centiloid of 3.35 CL and 4.62 CL, for [18F]flutemetamol and [18F]florbetaben, respectively. At 12.5% injected dose, the percentage change in SUVR and absolute change in Centiloid were < 1.5%. AUCs for discriminating Aβ- from Aβ+ scans were high (AUC ≥ 0.94) across dose levels, and visual assessment showed intra-reader agreement of > 80% for both tracers. CONCLUSION: This proof-of-concept study showed that for both [18F]flutemetamol and [18F]florbetaben, adequate quantitative and qualitative assessments can be obtained at 12.5% of the original injected dose. However, decisions to reduce the injected dose should be made considering the specific clinical or research circumstances.
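The SUVR and Centiloid quantities compared across dose levels in this abstract can be sketched as follows. The Centiloid scale is a linear rescaling of SUVR anchored at young controls (0 CL) and a typical AD group (100 CL); the anchor values and the toy uptake numbers below are placeholders, not the tracer-specific calibration constants used in the study.

```python
import numpy as np

def suvr(target_uptake, reference_uptake):
    """SUVR: mean uptake in the cortical target region divided by mean
    uptake in the cerebellar cortex reference region."""
    return float(np.mean(target_uptake) / np.mean(reference_uptake))

def centiloid(suvr_value, suvr_yc=1.0, suvr_ad=2.0):
    """Linear Centiloid transform anchoring young controls at 0 CL and a
    typical AD group at 100 CL. The anchor SUVRs are tracer- and
    pipeline-specific calibration constants; the defaults here are
    illustrative placeholders, not published values."""
    return 100.0 * (suvr_value - suvr_yc) / (suvr_ad - suvr_yc)

target = np.array([2.8, 3.0, 3.2])      # toy cortical uptake values
reference = np.array([1.9, 2.0, 2.1])   # toy cerebellar uptake values
ratio = suvr(target, reference)
```

Because the transform is linear, a percentage change in SUVR at a reduced dose level maps directly to an absolute shift on the CL scale, which is why both differences are reported side by side in the results.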


Alzheimer Disease , Aniline Compounds , Stilbenes , Humans , Benzothiazoles , Amyloid/metabolism , Positron-Emission Tomography/methods , Alzheimer Disease/diagnostic imaging , Amyloid beta-Peptides/metabolism , Brain/metabolism
...