Results 1 - 20 of 34
1.
Med Phys ; 51(6): 4095-4104, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38629779

ABSTRACT

BACKGROUND: Contrast-enhanced computed tomography (CECT) provides considerably more information than non-enhanced CT images, particularly for differentiating malignancies such as liver carcinomas. However, contrast media injection phase information is usually missing from public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research. PURPOSE: The aim of this study was to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS: A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) phases after contrast media injection, were collected from two CT scanners. Masks for seven organs (liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta), along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features (mean, standard deviation, and the 10th, 50th, and 90th percentiles) extracted from these masks were fed to machine learning models, after feature selection and reduction, to classify each CT image into one of the four classes. A 10-fold data split strategy was followed. The performance of the methodology was evaluated in terms of classification accuracy metrics. RESULTS: The best performance was achieved by Boruta feature selection combined with a random forest (RF) model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest per-class accuracy was observed for class #2 (0.9888), which is still an excellent result. Across the 10 folds, only 33 of 2509 cases (∼1.3%) were misclassified, and performance was consistent over all folds. CONCLUSIONS: We developed a fast, accurate, reliable, and explainable methodology to classify contrast media injection phases, which may be useful for data curation and annotation in large online datasets or local datasets with non-standard or missing series descriptions. Our two-step model, combining deep learning segmentation with machine learning classification, may help exploit available datasets more effectively.


Subject(s)
Automation; Contrast Media; Image Processing, Computer-Assisted; Machine Learning; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Radiography, Abdominal; Abdomen/diagnostic imaging
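
As an illustration of the two-step pipeline described above, the sketch below computes the five first-order features from organ masks and trains a random forest with 10-fold cross-validation. All data shapes, names, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def first_order_features(ct, mask):
    """Mean, std, and 10th/50th/90th percentiles of HU values inside one mask."""
    voxels = ct[mask > 0]
    return [voxels.mean(), voxels.std(), *np.percentile(voxels, [10, 50, 90])]

def scan_features(ct, masks):
    """Concatenate the 5 first-order features over all masks (7 organs + body)."""
    return np.concatenate([first_order_features(ct, m) for m in masks])

# Toy stand-in data: 200 scans x (8 masks * 5 features), phase labels 0..3
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 4, size=200)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```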
2.
J Med Radiat Sci ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454637

ABSTRACT

INTRODUCTION: Concerns regarding the adverse consequences of radiation have increased due to the expanded use of computed tomography (CT) in medical practice. Several studies have indicated that the radiation dose depends on the anatomical region, the imaging technique employed, and patient-specific variables. The aim of this study was to present fitting models for the estimation of age-specific dose estimates (ASDE), analogous to size-specific dose estimates, and of effective doses, based on patient age, gender, and the type of CT examination in paediatric head, chest, and abdomen-pelvis imaging. METHODS: A total of 583 paediatric patients were included in the study. Radiometric data were gathered from DICOM files. The patients were categorised into five distinct age groups (all under 15 years of age), and the effective dose, organ dose, and ASDE were computed for CT examinations of the head, chest, and abdomen-pelvis. Finally, the best-fitting models were presented for the estimation of ASDE and effective doses based on patient age, gender, and examination type. RESULTS: The ASDE in head, chest, and abdomen-pelvis CT examinations increased with age. In contrast, the effective dose in head and abdomen-pelvis CT scans decreased with increasing age; for chest scans, the effective dose initially decreased until the first year of life and thereafter increased with age. CONCLUSIONS: Based on the presented fitting models, the ASDE depends on factors such as patient age and the type of CT examination; for the effective dose, gender was also included in the fitting model. By utilising information about the scan type, region, and age, it becomes feasible to estimate the ASDE and effective dose using the models provided in this study.
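
To make the idea of such fitting models concrete, here is a minimal sketch of fitting a dose-versus-age curve with scipy. The power-law form, the toy data, and all parameter values are assumptions for illustration; the abstract does not reproduce the actual fitted models.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(age, a, b, c):
    # dose = a * age^b + c  (hypothetical model form)
    return a * np.power(age, b) + c

# Toy data: (age in years, ASDE in mGy) for chest CT -- stand-in values only
age = np.array([0.5, 1, 2, 4, 6, 8, 10, 12, 14])
asde = np.array([3.1, 3.4, 3.9, 4.6, 5.2, 5.9, 6.5, 7.2, 7.8])

params, _ = curve_fit(power_law, age, asde, p0=[1.0, 0.5, 1.0])
print("fit parameters (a, b, c):", params)
print("predicted ASDE at age 5:", power_law(5, *params))
```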

3.
Med Phys ; 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38335175

ABSTRACT

BACKGROUND: Notwithstanding the encouraging results of previous studies on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology remains limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split randomly, with stratification by center and class, into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. A DenseNet model was used for feature extraction. The model was developed using both centralized and federated learning (FL) approaches; for FL, we employed DPFL. Membership inference attacks were also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test set. In addition, the models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized and DPFL models, respectively. The DeLong test did not show statistically significant differences between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 (95% CI for the mean AUC: 0.500-0.501), i.e., no better than random guessing. CONCLUSION: The performance of the proposed model was comparable to that of centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
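
For orientation, the sketch below shows one round of plain federated averaging plus a simple clip-and-noise step of the kind used in differentially private FL. It is a generic illustration under assumed parameters (clip, sigma), not the authors' DPFL implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter dicts (name -> ndarray)."""
    total = sum(client_sizes)
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(client_weights, client_sizes))
            for k in client_weights[0]}

def dp_sanitize(update, clip=1.0, sigma=0.01):
    """Clip the update's global L2 norm, then add Gaussian noise (DP sketch)."""
    flat = np.concatenate([v.ravel() for v in update.values()])
    scale = min(1.0, clip / (np.linalg.norm(flat) + 1e-12))
    rng = np.random.default_rng()  # fresh noise per client update
    return {k: v * scale + rng.normal(0.0, sigma * clip, v.shape)
            for k, v in update.items()}

# Toy example: three "centers" with one weight tensor each
clients = [{"conv1": np.full((2, 2), c)} for c in (1.0, 2.0, 3.0)]
sanitized = [dp_sanitize(w) for w in clients]
print(fedavg(sanitized, client_sizes=[100, 50, 50])["conv1"])
```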

4.
J Biomed Inform ; 150: 104583, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38191010

ABSTRACT

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD: Our approach utilizes a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes, each responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise: when applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be utilized in other medical imaging modalities as well.


Subject(s)
Algorithms; Privacy; Information Dissemination
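
The obfuscation idea can be sketched as follows: pseudo-random noise derived from a compact private key is added to the latent codes, and only a holder of the key can regenerate and subtract it. The functions and parameters below (obfuscate, recover, strength) are hypothetical stand-ins for the paper's scheme.

```python
import numpy as np

def obfuscate(codes, key, strength=5.0):
    rng = np.random.default_rng(key)          # the key seeds the noise stream
    return codes + strength * rng.standard_normal(codes.shape)

def recover(obfuscated, key, strength=5.0):
    rng = np.random.default_rng(key)          # same key -> same noise sequence
    return obfuscated - strength * rng.standard_normal(obfuscated.shape)

codes = np.random.rand(4, 128)                # stand-in auto-encoder codes
public = obfuscate(codes, key=123456)         # stored in the public domain
restored = recover(public, key=123456)        # only works with the right key
print(np.allclose(codes, restored))           # True
```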
5.
Eur J Nucl Med Mol Imaging ; 51(6): 1516-1529, 2024 May.
Article in English | MEDLINE | ID: mdl-38267686

ABSTRACT

PURPOSE: Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been increasing interest in leveraging their calculation speed and automation capabilities. We aimed to develop a hybrid transformer-based DL model incorporating a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy. The goal was to achieve accuracy levels closely aligned with Monte Carlo (MC) simulations, considered the standard of reference. We extended our analysis to include the MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS: We used a dataset consisting of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, the MIRD formalism approaches, namely single S-value (SSV) and MSV techniques, were performed. A UNEt TRansformer (UNETR) DL architecture was trained using five-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, whereas the difference between MC and MSV (MC-MSV) was set as the output. The DL output was then added back to the MSV maps to recover the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both the voxel level and the organ level (organs at risk and lesions). RESULTS: The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) compared to MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for the DL, MSV, and SSV approaches, respectively. The computational time for MC was the highest (~2 days for a single-bed SPECT study), whereas the DL-based approach was the most time-efficient (3 s for a single-bed SPECT study). Organ-wise analysis showed absolute percent errors of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% for the SSV, MSV, and DL approaches, respectively, in lesion absorbed doses. CONCLUSION: A hybrid transformer-based DL model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, specifically in heterogeneous regions. The model achieved accuracy close to the MC gold standard and has potential for clinical implementation on large-scale datasets.


Subject(s)
Octreotide; Octreotide/analogs & derivatives; Organometallic Compounds; Radiometry; Radiopharmaceuticals; Single Photon Emission Computed Tomography Computed Tomography; Humans; Octreotide/therapeutic use; Organometallic Compounds/therapeutic use; Single Photon Emission Computed Tomography Computed Tomography/methods; Radiometry/methods; Radiopharmaceuticals/therapeutic use; Precision Medicine/methods; Deep Learning; Male; Female; Monte Carlo Method; Image Processing, Computer-Assisted/methods; Neuroendocrine Tumors/radiotherapy; Neuroendocrine Tumors/diagnostic imaging
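
As background, MIRD voxel S-value dosimetry computes the absorbed dose map as a convolution of the cumulated activity map with an S-value kernel. The sketch below shows the single-S-value (SSV) variant with a toy kernel; a validated 177Lu kernel and real activity maps would be needed in practice.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy cumulated-activity map (MBq*s per voxel) with a small "lesion"
activity = np.zeros((32, 32, 32))
activity[14:18, 14:18, 14:18] = 1.0

# Toy 5x5x5 S-value kernel (mGy per MBq*s), peaked at the source voxel;
# placeholder values, NOT a validated 177Lu kernel
kernel = np.zeros((5, 5, 5))
kernel[2, 2, 2] = 1.0
kernel[1:4, 1:4, 1:4] += 0.05

dose = fftconvolve(activity, kernel, mode="same")  # absorbed dose map (mGy)
print(dose.max(), dose.sum())
```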
6.
Radiat Oncol ; 19(1): 12, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38254203

ABSTRACT

BACKGROUND: This study aimed to investigate the value of clinical features, radiomic features extracted from gross tumor volumes (GTVs) delineated on CT images, dose distributions (dosiomics), and fused CT/dose distributions for predicting outcomes in head and neck cancer (HNC) patients. METHODS: A cohort of 240 HNC patients from five different centers was obtained from The Cancer Imaging Archive. Seven strategies were applied: four non-fusion strategies (clinical, CT, dose, dual CT-dose) and three fusion algorithms (latent low-rank representation (LLRR), wavelet, and weighted least squares (WLS)). The fusion algorithms were used to fuse the pre-treatment CT images and 3-dimensional dose maps. Overall, 215 radiomic and dosiomic features were extracted from the GTVs, alongside seven clinical features. Five feature selection (FS) methods in combination with six machine learning (ML) models were implemented. Model performance was quantified using the concordance index (CI) in one-center-leave-out 5-fold cross-validation for overall survival (OS) prediction, considering time-to-event. RESULTS: The mean CI and Kaplan-Meier curves were used for comparisons. The CoxBoost ML model with the minimal depth (MD) FS method and the glmnet model with the variable hunting (VH) FS method showed the best performance, with CI = 0.73 ± 0.15 for features extracted from LLRR-fused images. In addition, both the glmnet-Cindex and Coxph-Cindex classifiers achieved a CI of 0.72 ± 0.14 by employing the dose images (with clinical features incorporated) only. CONCLUSION: Our results demonstrated that clinical features, dosiomics, and the fusion of dose and CT images with specific ML-FS combinations can predict the overall survival of HNC patients with acceptable accuracy, and that performance across the different strategies was largely comparable.


Subject(s)
Head and Neck Neoplasms; Radiomics; Humans; Prognosis; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Machine Learning; Tomography, X-Ray Computed
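
For reference, the concordance index used to rank these models is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal implementation for right-censored data follows (toy data; ties in time are not specially handled).

```python
import numpy as np

def concordance_index(time, event, risk):
    """time: survival/censoring times; event: 1 if death observed;
    risk: higher value = predicted worse outcome."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                      # comparable pairs anchor on events
        for j in range(n):
            if time[j] > time[i]:         # patient j outlived patient i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0     # risks ordered consistently
                elif risk[i] == risk[j]:
                    concordant += 0.5     # tied risks count half
    return concordant / comparable

t = np.array([5, 8, 12, 20, 25])
e = np.array([1, 1, 0, 1, 0])
r = np.array([0.9, 0.7, 0.6, 0.3, 0.1])
print(concordance_index(t, e, r))         # 1.0: perfectly ranked
```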
7.
Med Phys ; 51(1): 319-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37475591

ABSTRACT

BACKGROUND: PET/CT images, combining anatomic and metabolic data, provide complementary information that can improve clinical task performance. However, PET image segmentation algorithms exploiting the available multi-modal information are still lacking. PURPOSE: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS: The study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline, with three different fusion levels: input-, layer-, and decision-level information fusion. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and the image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). The networks were trained in 2D with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was used for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS: Among single modalities, PET had reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level fusion using majority voting over several algorithms yields statistically significant improvements in the segmentation of HNC.


Subject(s)
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Algorithms; Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
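
Output-level majority voting, the best-performing fusion here, reduces to a per-voxel vote across candidate masks. A minimal sketch with toy masks:

```python
import numpy as np

def majority_vote(masks):
    """masks: list of binary arrays of identical shape from different models.
    A voxel is labelled tumour when more than half of the models agree."""
    stacked = np.stack(masks).astype(np.uint8)
    return (stacked.sum(axis=0) > len(masks) / 2).astype(np.uint8)

# Three toy 4x4 candidate segmentations
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
c = np.array([[1, 1, 1, 0]] * 4)
print(majority_vote([a, b, c]))   # 1 where at least 2 of 3 models agree
```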
8.
Clin Nucl Med ; 48(12): 1035-1046, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37883015

ABSTRACT

PURPOSE: Medical imaging artifacts compromise image quality and quantitative analysis and might confound interpretation and misguide clinical decision-making. The present work envisions and demonstrates a new paradigm, the PET image Quality Assurance NETwork (PET-QA-NET), in which various image artifacts are detected and disentangled from images without prior knowledge of a standard of reference or ground truth, for routine PET image quality assurance. METHODS: The network was trained and evaluated using training/validation/test sets consisting of 669/100/100 artifact-free oncological 18F-FDG PET/CT images and subsequently fine-tuned and evaluated on 384 scans (20% for fine-tuning) from 8 different PET centers. The developed DL model was quantitatively assessed using various image quality metrics calculated for 22 volumes of interest defined on each scan. In addition, 200 further 18F-FDG PET/CT scans (this time with artifacts), generated using both CT-based attenuation and scatter correction (routine PET) and PET-QA-NET, were blindly evaluated by two nuclear medicine physicians for the presence of artifacts, diagnostic confidence, image quality, and the number of lesions detected in different body regions. RESULTS: Across the volumes of interest of 100 patients, SUV MAE values of 0.13 ± 0.04, 0.24 ± 0.1, and 0.21 ± 0.06 were reached for SUVmean, SUVmax, and SUVpeak, respectively (no statistically significant difference). Qualitative assessment showed a general trend of improved image quality and diagnostic confidence and reduced image artifacts for PET-QA-NET compared with routine CT-based attenuation and scatter correction. CONCLUSION: We developed a highly effective and reliable quality assurance tool that can be embedded routinely to detect and correct 18F-FDG PET image artifacts in the clinical setting, with notably improved PET image quality and quantitative capabilities.


Subject(s)
Fluorodeoxyglucose F18; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Artificial Intelligence; Artifacts; Positron-Emission Tomography/methods; Image Processing, Computer-Assisted/methods
9.
Eur J Nucl Med Mol Imaging ; 51(1): 40-53, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37682303

ABSTRACT

PURPOSE: Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients, and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues when building centre-specific models that detect and correct artefacts in PET images. METHODS: Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed on 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS: The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in mean absolute errors (MAE) of 0.42 ± 0.21 (95% CI: 0.38-0.47), 0.32 ± 0.23 (95% CI: 0.27-0.37) and 0.28 ± 0.15 (95% CI: 0.25-0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) on the clean test set. Qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions. CONCLUSION: The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.


Subject(s)
Positron Emission Tomography Computed Tomography; Prostatic Neoplasms; Male; Humans; Positron Emission Tomography Computed Tomography/methods; Artifacts; Gallium Radioisotopes; Privacy; Positron-Emission Tomography/methods; Machine Learning; Image Processing, Computer-Assisted/methods
10.
Sci Rep ; 13(1): 14920, 2023 Sep 10.
Article in English | MEDLINE | ID: mdl-37691039

ABSTRACT

This study aimed to investigate the diagnostic performance of machine learning-based radiomics analysis for diagnosing coronary artery disease (CAD) status and risk from rest/stress myocardial perfusion imaging (MPI) single-photon emission computed tomography (SPECT). A total of 395 patients with suspected CAD who underwent 2-day stress-rest protocol MPI SPECT were enrolled in this study. The left ventricle myocardium, excluding the cardiac cavity, was manually delineated on rest and stress images to define a volume of interest. In addition to clinical features (age, sex, family history, diabetes status, smoking, and ejection fraction), a total of 118 radiomic features were extracted from rest and stress MPI SPECT images to establish different feature sets, including rest, stress, delta, and combined (all together) radiomics feature sets. The data were randomly divided into 80% and 20% subsets for training and testing, respectively. The performance of classifiers built from combinations of three feature selection methods and nine machine learning algorithms was evaluated for two diagnostic tasks: (1) normal/abnormal (no CAD vs. CAD) classification and (2) low-risk/high-risk CAD classification. Different metrics, including the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE), were reported for model evaluation. Overall, models built on the stress feature set (compared to the other feature sets) and models for the second task (compared to task 1) showed better performance. The Stress-mRMR-KNN model (feature set-feature selection-classifier) reached the highest performance for task 1, with AUC, ACC, SEN, and SPE of 0.61, 0.63, 0.64, and 0.6, respectively. The Stress-Boruta-GB model achieved the highest performance for task 2, with AUC, ACC, SEN, and SPE of 0.79, 0.76, 0.75, and 0.76, respectively. Diabetes status (from the clinical feature family) and dependence count non-uniformity normalized (from the NGLDM family, representing non-uniformity in the region of interest) were the most frequently selected features from the stress feature set for CAD risk classification. This study revealed promising results for CAD risk classification using machine learning models built on MPI SPECT radiomics. The proposed models could help alleviate the labor-intensive MPI SPECT interpretation process regarding CAD status and potentially expedite the diagnostic process.


Subject(s)
Coronary Artery Disease; Diabetes Mellitus; Myocardial Perfusion Imaging; Humans; Coronary Artery Disease/diagnostic imaging; Machine Learning; Tomography, Emission-Computed, Single-Photon; Male; Female
11.
EJNMMI Res ; 13(1): 63, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37395912

ABSTRACT

BACKGROUND: Selective internal radiation therapy with 90Y radioembolization aims to selectively irradiate liver tumours by administering radioactive microspheres, under the theragnostic assumption that the pre-therapy injection of 99mTc-labelled macroaggregated albumin (99mTc-MAA) provides an estimation of the 90Y microsphere biodistribution, which is not always the case. Given the growing interest in theragnostic dosimetry for personalized radionuclide therapy, a robust relationship between the delivered and pre-treatment radiation absorbed doses is required. In this work, we investigated the predictive value of absorbed dose metrics calculated from 99mTc-MAA (simulation) compared to those obtained from 90Y post-therapy SPECT/CT. RESULTS: A total of 79 patients were analysed. Pre- and post-therapy 3D voxel dosimetry was calculated on 99mTc-MAA and 90Y SPECT/CT, respectively, based on the local deposition method. The mean absorbed dose, tumour-to-normal ratio, and absorbed dose distribution in terms of dose-volume histogram (DVH) metrics were obtained and compared for each volume of interest (VOI). The Mann-Whitney U-test and Pearson's correlation coefficient were used to assess differences and correlations between the two methods. The effect of the tumoral liver volume on the absorbed dose metrics was also investigated. A strong correlation was found between simulation and therapy mean absorbed doses for all VOIs, although simulation tended to overestimate tumour absorbed doses by 26%. DVH metrics also showed good correlation, but significant differences were found for several metrics, mostly in the non-tumoral liver. Tumoral liver volume did not significantly affect the differences between simulation and therapy absorbed dose metrics. CONCLUSION: This study supports the strong correlation between absorbed dose metrics from simulation and therapy dosimetry based on 90Y SPECT/CT, highlighting the predictive ability of 99mTc-MAA, not only in terms of mean absorbed dose but also of the dose distribution.
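
Bland-Altman style agreement between simulation and therapy doses, of the kind examined in this study, reduces to the bias and 95% limits of agreement of the paired differences. A minimal sketch with stand-in dose values:

```python
import numpy as np

def bland_altman(sim, ther):
    """Return bias and 95% limits of agreement for paired measurements."""
    diff = sim - ther
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits
    return bias, bias - loa, bias + loa

sim = np.array([120.0, 85.0, 200.0, 45.0, 150.0])   # Gy, toy 99mTc-MAA values
ther = np.array([95.0, 80.0, 160.0, 40.0, 130.0])   # Gy, toy 90Y values
bias, lo, hi = bland_altman(sim, ther)
print(f"bias = {bias:.1f} Gy, 95% LoA = [{lo:.1f}, {hi:.1f}] Gy")
```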

12.
Comput Methods Programs Biomed ; 240: 107706, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37506602

ABSTRACT

BACKGROUND AND OBJECTIVE: Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS: A dataset of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm, using dual-channel PET/CT images. We evaluated different frameworks (single-center-based, a centralized baseline, and seven different FL algorithms) using 68 PET/CT images (20% of each center's data). In particular, the implemented FL algorithms included clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS: The Dice coefficient was 0.80 ± 0.11 for both the centralized and SeAg FL algorithms. All FL approaches matched the performance of the centralized learning model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the differences were not statistically significant. All algorithms except the center-based approach resulted in relative errors of less than 5% for SUVmax and SUVmean. Centralized and FL algorithms significantly outperformed the single-center baseline. CONCLUSIONS: The developed FL algorithms, achieving centralized-method performance, exhibited promising results for HN tumor segmentation from PET/CT images.


Subject(s)
Deep Learning; Neoplasms; Humans; Algorithms; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Positron Emission Tomography Computed Tomography/methods
13.
Eur Radiol ; 33(12): 9411-9424, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37368113

ABSTRACT

OBJECTIVE: We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. METHODS: The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned at two tube voltages, through transfer learning, with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations, including mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %), were performed. RESULTS: The model performance for the 120 kVp and TCM test set in terms of the voxel-wise ME, MAE, RE, and RAE was -0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, -1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, were -0.144 ± 0.342 mGy (ME), 0.23 ± 0.28 mGy (MAE), -1.11 ± 2.90% (RE), and 2.34 ± 2.03% (RAE). CONCLUSION: Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. CLINICAL RELEVANCE STATEMENT: We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant since accurate dose calculation can be carried out for patients within acceptable computational time, in contrast to lengthy Monte Carlo calculations. KEY POINTS: • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation. • Our proposed deep learning model generates voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation. • By generating the dose distribution from a single source position, our model can produce accurate and personalized dose maps for a wide range of acquisition parameters.


Subject(s)
Neural Networks, Computer; Whole Body Imaging; Humans; Phantoms, Imaging; Monte Carlo Method; Tomography, X-Ray Computed; Radiation Dosage
14.
Head Neck Tumor Chall (2022) ; 13626: 1-30, 2023.
Article in English | MEDLINE | ID: mdl-37195050

ABSTRACT

This paper presents an overview of the third edition of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge, organized as a satellite event of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. The challenge comprises two tasks related to the automatic analysis of FDG-PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the fully automatic segmentation of H&N primary Gross Tumor Volume (GTVp) and metastatic lymph nodes (GTVn) from FDG-PET/CT images. Task 2 is the fully automatic prediction of Recurrence-Free Survival (RFS) from the same FDG-PET/CT and clinical data. The data were collected from nine centers for a total of 883 cases consisting of FDG-PET/CT images and clinical information, split into 524 training and 359 test cases. The best methods obtained an aggregated Dice Similarity Coefficient (DSCagg) of 0.788 in Task 1, and a Concordance index (C-index) of 0.682 in Task 2.
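
The aggregated Dice used for Task 1 ranking pools intersections and volumes over all test cases before taking the ratio, rather than averaging per-case Dice scores, so small lesions cannot be ignored case by case. A minimal sketch (toy masks) consistent with the challenge's published definition:

```python
import numpy as np

def dsc_agg(preds, gts):
    """Aggregated Dice: sum intersections and volumes over ALL cases first."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    sizes = sum(p.sum() + g.sum() for p, g in zip(preds, gts))
    return 2.0 * inter / sizes

# Two toy cases: one perfect, one almost empty prediction
p1, g1 = np.ones((4, 4)), np.ones((4, 4))
p2, g2 = np.zeros((4, 4)), np.eye(4)
p2[0, 0] = 1                      # a single true-positive voxel in case 2
print(dsc_agg([p1, p2], [g1, g2]))
```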

15.
Z Med Phys ; 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36932023

ABSTRACT

PURPOSE: Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases during their early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of disease can be challenging because the patterns often resemble normal appearance and are prone to subjective interpretation. To simplify the gruelling, subjective, and error-prone task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal, and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with human observers. MATERIALS AND METHODS: After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and test sets, with a fraction of the training data reserved for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, including squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA) modules, were used to aggregate the features for the dual-view input models. Model performance was reported through the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and compared using the DeLong test applied to the ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers. RESULTS: DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. On average, InceptionV3 and InceptionResNetV2 with dual-view input and AA aggregation performed best in the first analysis, while DenseNet121 and InceptionResNetV2 with dual-view input and AA aggregation achieved the best results in the second. The performance of the AI models was significantly higher than that of the human observers for the first analysis, whereas performance was comparable in the second analysis, although the AI models assessed the scans in drastically less time. CONCLUSION: Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.

16.
Eur J Nucl Med Mol Imaging ; 50(7): 1881-1896, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36808000

ABSTRACT

PURPOSE: The partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS: Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The iterative Yang technique was used for PVC, serving as a reference or surrogate ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS: The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29 to +0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26 to +0.24 SUV, mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION: An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. It eliminates the need for accurate registration, segmentation, or PET scanner system response characterization, and no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.


Subject(s)
Aniline Compounds; Fluorodeoxyglucose F18; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
17.
Eur Radiol ; 33(5): 3243-3252, 2023 May.
Article in English | MEDLINE | ID: mdl-36703015

ABSTRACT

OBJECTIVES: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS: We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of the patient bodies were estimated by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of the model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS: The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which was 9.35 ± 14.94 mm and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 mm and 8.26 ± 6.96 mm, and the LCAP was 1.56 ± 10.8 mm and -0.27 ± 16.29 mm, for C1 and C2, respectively. The errors in terms of BCAP and LCAP were higher for larger patients (p-value < 0.01). CONCLUSION: The accuracy of the proposed method was comparable to available alternative methods, with the advantage of being free from errors related to objects blocking the camera's visibility. KEY POINTS: • Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice that can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT localizer, achieving performance comparable to alternative techniques, such as external 3D visual cameras. • The advantage of the proposed method is that it is free from errors related to objects blocking the camera's visibility and could be implemented on imaging consoles as a patient positioning support tool.


Subject(s)
Neural Networks, Computer; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Imaging, Three-Dimensional; Patient Positioning/methods; Image Processing, Computer-Assisted/methods
18.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, and they remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting, without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset consisted of 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test revealed no significant differences between the CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode, while a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between the CZ and FL-based methods and the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: DL-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized model. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods
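
The SUV conversion mentioned here is the standard body-weight formulation: activity concentration divided by the decay-corrected injected activity per gram of body weight. A minimal sketch (illustrative values; 18F half-life and a tissue density of 1 g/mL assumed):

```python
import numpy as np

def suv(activity_bqml, injected_mbq, weight_kg, delay_s, half_life_s=6588.0):
    """Body-weight SUV; half_life_s defaults to 18F (109.8 min)."""
    decayed_bq = injected_mbq * 1e6 * 0.5 ** (delay_s / half_life_s)
    return activity_bqml / (decayed_bq / (weight_kg * 1000.0))  # 1 g ~ 1 mL

img = np.array([[5000.0, 12000.0], [800.0, 30000.0]])  # Bq/mL, toy voxels
print(suv(img, injected_mbq=300.0, weight_kg=75.0, delay_s=3600.0))
```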
19.
Cancer Biother Radiopharm ; 38(10): 663-669, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36576502

ABSTRACT

Purpose: Nonalcoholic fatty liver disease (NAFLD) is the most common chronic hepatic disease worldwide, with functional impairment of the mitochondria occurring from the early stages. Technetium-99m methoxy-isobutyl-isonitrile (99mTc-MIBI) is a lipophilic agent trapped in the mitochondria. This study aimed to evaluate the utility of the 99mTc-MIBI heart/liver uptake ratio in screening for NAFLD during myocardial perfusion imaging (MPI). Methods: Seventy eligible patients underwent a 2-day rest/stress 99mTc-MIBI scan, with 2-min planar images acquired in the rest phase at 30, 60, and 120 min after radiotracer administration. The heart/liver uptake ratio was calculated by placing identical regions of interest on the heart and the liver dome. All patients underwent liver ultrasound and were allocated to group A (patients with NAFLD) or group B (healthy individuals without NAFLD). Results: Mean count-per-pixel heart/liver ratios gradually increased over time in both groups; nonetheless, the values were significantly higher in group A regardless of acquisition timing, with p-values of 0.007, 0.014, and 0.010 at 30, 60, and 120 min, respectively. Conclusion: Determining the 99mTc-MIBI heart/liver uptake ratio during the rest phase in patients undergoing MPI may be a useful, noninvasive screening method for NAFLD, with no additional cost, radiation burden, or adverse effects for these patients. Trial registration number: IR.SBMU.MSP.REC.1398.308.


Subject(s)
Myocardial Perfusion Imaging; Non-alcoholic Fatty Liver Disease; Humans; Non-alcoholic Fatty Liver Disease/diagnostic imaging; Technetium Tc 99m Sestamibi; Heart/diagnostic imaging; Technetium; Tomography, Emission-Computed, Single-Photon/methods
20.
Appl Radiat Isot ; 193: 110628, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36577360

ABSTRACT

The estimated incidence and mortality rates (per 100,000) associated with chest CT are highest for the lungs and breasts (incidence: lung = 116, breast = 98.64; mortality: lung = 113.43, breast = 49.72). For abdominopelvic CT scans, the highest incidence rates were found for the stomach (79.57), colon (62.86), bladder (48.69), and liver (28.63), while mortality was highest for the bladder (80.44), stomach (72.43), colon (69.02), and liver (63.78). This study helps to better understand the concept of radiation dose, the quantities reported as organ dose and effective dose, and the probability of stochastic effects.


Subject(s)
Thorax; Tomography, X-Ray Computed; Humans; Radiation Dosage; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Breast