Results 1-7 of 7
1.
Eur J Nucl Med Mol Imaging; 51(1): 40-53, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37682303

ABSTRACT

PURPOSE: Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues while building centre-specific models that detect and correct artefacts in PET images. METHODS: Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under three frameworks: centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL. Quantitative analysis was performed on the remaining 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted a qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS: The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (95% CI: 0.38 to 0.47), 0.32 ± 0.23 (95% CI: 0.27 to 0.37) and 0.28 ± 0.15 (95% CI: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p < 0.05) on the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions. CONCLUSION: The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated into the clinic for 68Ga-PET artefact detection and disentanglement using multicentric heterogeneous datasets.
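
The differential-privacy mechanism summarized above can be pictured as a federated-averaging loop in which each centre's weight update is clipped and perturbed with Gaussian noise before aggregation. Below is a minimal PyTorch sketch of one such round, assuming paired corrupted/clean PET volumes per centre; the function name, loader format, and hyperparameters are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of one differentially private federated round (not the paper's code).
import copy
import torch

def dp_federated_round(global_model, client_loaders, loss_fn,
                       clip_norm=1.0, noise_std=0.01, lr=1e-3):
    """One FedAvg-style round: local training per centre, then aggregation
    of clipped, noise-perturbed weight updates."""
    global_params = [p.detach().clone() for p in global_model.parameters()]
    summed = [torch.zeros_like(p) for p in global_params]

    for loader in client_loaders:
        local = copy.deepcopy(global_model)          # start from global weights
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        for corrupted, clean in loader:              # centre-local PET pairs
            opt.zero_grad()
            loss_fn(local(corrupted), clean).backward()
            opt.step()

        # Client update = local - global, clipped to bound its sensitivity.
        update = [lp.detach() - gp
                  for lp, gp in zip(local.parameters(), global_params)]
        norm = torch.sqrt(sum(u.pow(2).sum() for u in update))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))
        for s, u in zip(summed, update):
            s.add_(u, alpha=scale)

    # Gaussian noise calibrated to the clipping norm provides the DP guarantee.
    with torch.no_grad():
        n = len(client_loaders)
        for p, gp, s in zip(global_model.parameters(), global_params, summed):
            noise = torch.randn_like(s) * noise_std * clip_norm
            p.copy_(gp + (s + noise) / n)
    return global_model
```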


Subject(s)
Positron Emission Tomography Computed Tomography, Prostatic Neoplasms, Male, Humans, Positron Emission Tomography Computed Tomography/methods, Artifacts, Gallium Radioisotopes, Privacy, Positron-Emission Tomography/methods, Machine Learning, Image Processing, Computer-Assisted/methods
2.
Phys Eng Sci Med; 47(2): 741-753, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38526647

ABSTRACT

Early diagnosis of prostate cancer, the most common malignancy in men, can improve patient outcomes. Since tissue sampling procedures are invasive and sometimes inconclusive, an alternative image-based method can prevent possible complications and facilitate treatment management. We aimed to propose a machine-learning model for tumor grade estimation based on 68Ga-PSMA-11 PET/CT images in prostate cancer patients. This study included 90 eligible participants out of 244 biopsy-proven prostate cancer patients who underwent staging 68Ga-PSMA-11 PET/CT imaging. The patients were divided into high and low-intermediate groups based on their Gleason scores. The PET-only images were manually segmented, both lesion-based and whole-prostate, by two experienced nuclear medicine physicians. Four feature selection algorithms and five classifiers were applied to ComBat-harmonized and non-harmonized datasets. To evaluate the models' generalizability across different institutions, we performed leave-one-center-out cross-validation (LOOCV). Metrics derived from the receiver operating characteristic curve were used to assess model performance. In the whole-prostate segmentation, combining the ANOVA feature selector with the Random Forest (RF) and Extra Trees (ET) classifiers resulted in the highest performance among the models, with AUCs of 0.78 and 0.83, respectively. In the lesion-based segmentation, the highest AUCs were achieved by the MRMR feature selector with the Linear Discriminant Analysis (LDA) and Logistic Regression (LR) classifiers (0.76 and 0.79, respectively). The LOOCV results revealed that both the RF_ANOVA and ET_ANOVA models showed high levels of accuracy and generalizability across different centers, with an average AUC of 0.87 for the ET_ANOVA combination. Machine learning-based analysis of radiomics features extracted from 68Ga-PSMA-11 PET/CT scans can accurately classify prostate tumors into low-risk and intermediate- to high-risk groups.
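
As a rough illustration of the whole-prostate pipeline described above (ANOVA feature selection feeding a tree-ensemble classifier, validated leave-one-centre-out), the scikit-learn sketch below may be helpful; the feature matrix `X`, labels `y`, centre identifiers `centers`, and `k_features` are hypothetical stand-ins, not the study's exact configuration.

```python
# Sketch: ANOVA feature selection + Extra Trees with leave-one-centre-out CV.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loocv_auc(X, y, centers, k_features=10):
    """Hold out one centre per fold and return the mean test AUC."""
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=centers):
        model = make_pipeline(
            StandardScaler(),
            SelectKBest(f_classif, k=k_features),  # ANOVA F-test selector
            ExtraTreesClassifier(n_estimators=200, random_state=0),
        )
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
    return float(np.mean(aucs))
```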


Subject(s)
Gallium Isotopes, Gallium Radioisotopes, Machine Learning, Neoplasm Grading, Positron Emission Tomography Computed Tomography, Prostatic Neoplasms, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Aged, Middle Aged, Image Processing, Computer-Assisted, ROC Curve, Edetic Acid/analogs & derivatives, Oligopeptides/chemistry
3.
Med Phys; 51(1): 319-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37475591

ABSTRACT

BACKGROUND: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms that exploit the available multi-modal information are still lacking. PURPOSE: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline, with three different fusions operating at the input, layer, and decision levels. For output-level information fusion (voting-based fusion), simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions). The networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS: Among single modalities, PET performed reasonably with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level fusion using majority voting over several algorithms yields statistically significant improvements in the segmentation of HNC.
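
In its simplest form, the majority-voting half of the output-level fusion reduces to a voxel-wise vote over the candidate masks (STAPLE, which additionally weights raters by estimated performance, is not shown). A minimal sketch, with hypothetical mask names in the usage comment:

```python
# Sketch of output-level majority voting over binary segmentation masks.
import numpy as np

def majority_vote(masks):
    """Fuse binary masks: keep voxels predicted by more than half the models."""
    stacked = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    return (stacked.sum(axis=0) > len(masks) / 2).astype(np.uint8)

# e.g. fused = majority_vote([pet_mask, img_fusion_mask, net_fusion_mask])
```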


Subject(s)
Head and Neck Neoplasms, Positron Emission Tomography Computed Tomography, Humans, Positron Emission Tomography Computed Tomography/methods, Algorithms, Head and Neck Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods
4.
Endocrine; 82(2): 326-334, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37291392

ABSTRACT

OBJECTIVES: This study aims to use ultrasound-derived features as biomarkers to assess the malignancy of thyroid nodules in patients who were candidates for FNA according to the ACR TI-RADS guidelines. METHODS: Two hundred and ten patients who met the selection criteria were enrolled in the study and underwent ultrasound-guided FNA of thyroid nodules. Different radiomics features were extracted from the sonographic images, including intensity, shape, and texture feature sets. Least Absolute Shrinkage and Selection Operator (LASSO) and Minimum Redundancy Maximum Relevance (MRMR) algorithms were used for feature selection, and Random Forests and Extreme Gradient Boosting (XGBoost) for classification, in the univariate and multivariate modeling. The models were evaluated using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). RESULTS: In the univariate analysis, Gray-Level Run-Length Matrix Run-Length Non-Uniformity (GLRLM-RLNU) and Gray-Level Zone-Length Matrix Gray-Level Non-Uniformity (GLZLM-GLNU) (both with an AUC of 0.67) were the top-performing features for predicting nodule malignancy. In the multivariate analysis of the training dataset, the AUC of all combinations of feature selection algorithms and classifiers was 0.99, and the highest sensitivity (0.99) was achieved by the XGBoost classifier with the MRMR feature selection algorithm. Finally, the test dataset was used to evaluate our model, in which the XGBoost classifier with the MRMR and LASSO feature selection algorithms had the highest performance (AUC = 0.95). CONCLUSIONS: Ultrasound-extracted features can be used as non-invasive biomarkers for predicting thyroid nodule malignancy.
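
A hedged sketch of one multivariate configuration named above (LASSO feature selection followed by an XGBoost classifier) is given below; `X`, `y`, and the train/test split are hypothetical, and applying LassoCV to a binary label is a common shortcut rather than the study's documented procedure.

```python
# Sketch: LASSO-based feature selection feeding an XGBoost classifier.
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

def lasso_xgb_auc(X, y):
    """Select features with nonzero LASSO coefficients, then classify."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        SelectFromModel(LassoCV(cv=5, random_state=0)),  # LASSO selector
        XGBClassifier(n_estimators=200, eval_metric="logloss"),
    )
    model.fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```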


Subject(s)
Thyroid Neoplasms, Thyroid Nodule, Humans, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/pathology, Thyroid Neoplasms/diagnostic imaging, Thyroid Neoplasms/pathology, Ultrasonography/methods, Machine Learning, Biomarkers, Retrospective Studies
5.
Z Med Phys; 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36932023

ABSTRACT

PURPOSE: Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases during the early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of disease can be challenging because the patterns often resemble a normal appearance and are prone to subjective interpretation. To simplify this demanding, subjective, and error-prone task, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with that of human observers. MATERIALS AND METHODS: After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and test sets, with a fraction of the training data held out for validation. Ten different CNN models were applied in single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features of the dual-view input models. Model performance was reported through the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and was compared using the DeLong test applied to the ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers. RESULTS: DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, in the first analysis the InceptionV3 and InceptionResNetV2 CNN models and dual-view input with the AA aggregation method had superior performance, while in the second analysis DenseNet121 and InceptionResNetV2 as CNN methods and dual-view input with the AA aggregation method achieved the best results. The performance of the AI models was significantly higher than that of the human observers in the first analysis, whereas the two were comparable in the second analysis, although the AI models assessed the scans in drastically less time. CONCLUSION: Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.
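
One plausible reading of the dual-view aggregation is sketched below: a shared backbone encodes the anterior and posterior views, and a squeeze-and-excitation (SE) style gate reweights the concatenated features. This stands in generically for the SE/SPP/AA modules and does not reproduce the paper's exact DenseNet121_AA configuration; the class name and layer sizes are illustrative.

```python
# Sketch of a dual-view WBS classifier with SE-style feature aggregation.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DualViewSE(nn.Module):
    """Shared CNN backbone encodes both views; an SE-style gate reweights
    the concatenated features before classification."""
    def __init__(self, n_classes=2, reduction=16):
        super().__init__()
        backbone = densenet121(weights=None)
        # Accept single-channel scintigraphy images instead of RGB.
        backbone.features.conv0 = nn.Conv2d(1, 64, kernel_size=7,
                                            stride=2, padding=3, bias=False)
        backbone.classifier = nn.Identity()        # expose 1024-d features
        self.backbone = backbone
        feat = 1024 * 2                            # anterior + posterior
        self.gate = nn.Sequential(                 # squeeze-and-excitation gate
            nn.Linear(feat, feat // reduction), nn.ReLU(inplace=True),
            nn.Linear(feat // reduction, feat), nn.Sigmoid(),
        )
        self.head = nn.Linear(feat, n_classes)

    def forward(self, anterior, posterior):
        f = torch.cat([self.backbone(anterior),
                       self.backbone(posterior)], dim=1)
        return self.head(f * self.gate(f))         # gate, then classify
```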

6.
Comput Biol Med; 145: 105467, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35378436

ABSTRACT

BACKGROUND: We aimed to analyze the prognostic power of CT-based radiomics models using data from 14,339 COVID-19 patients. METHODS: Whole-lung segmentations were performed automatically using a deep learning-based model, and 107 intensity and texture radiomics features were extracted. We used four feature selection algorithms and seven classifiers. We evaluated the models using ten different splitting and cross-validation strategies, including non-harmonized and ComBat-harmonized datasets. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. RESULTS: In the test dataset (4,301 patients) consisting of CT- and/or RT-PCR-positive cases, an AUC, sensitivity, and specificity of 0.83 ± 0.01 (95% CI: 0.81-0.85), 0.81, and 0.72, respectively, were obtained by the ANOVA feature selector + Random Forest (RF) classifier. Similar results were achieved in the RT-PCR-only positive test set (3,644 patients). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier achieved the highest AUC of 0.83 ± 0.01 (95% CI: 0.81-0.85), with a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement over the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and the RF classifier resulted in the highest performance. CONCLUSION: Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when using a large multicentric heterogeneous dataset, and it may be used prospectively in the clinical setting to manage COVID-19 patients.
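
For intuition, harmonization of radiomics features across centres can be reduced to a per-centre location-scale adjustment, as in the simplified sketch below; real ComBat additionally applies empirical-Bayes shrinkage to the estimated centre effects, so this shows the core idea only, not the study's harmonization code.

```python
# Simplified location-scale harmonization of radiomics features per centre.
import numpy as np

def location_scale_harmonize(X, centers):
    """Align each feature's per-centre mean/std to the pooled mean/std."""
    X = X.astype(float).copy()
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0) + 1e-12
    for c in np.unique(centers):
        idx = centers == c
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0) + 1e-12
        X[idx] = (X[idx] - mu) / sd * grand_std + grand_mean
    return X
```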


Subject(s)
COVID-19, Lung Neoplasms, Algorithms, COVID-19/diagnostic imaging, Humans, Machine Learning, Prognosis, Retrospective Studies, Tomography, X-Ray Computed/methods
7.
Clin Nucl Med; 46(11): 872-883, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34238799

ABSTRACT

PURPOSE: The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and evaluated their performance on an external validation set of HNC patients. PATIENTS AND METHODS: 18F-FDG PET/CT images of 470 patients presenting with HNC, on which manually defined GTVs served as the standard of reference, were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized to the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes with an isotropic voxel spacing of 3 × 3 × 3 mm3 containing the whole tumor and neighboring background, including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, +20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7), to increase the number of training sets. Three state-of-the-art networks, Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, and first-order and shape radiomic features, were used for performance assessment of these algorithms. RESULTS: The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87) and Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and by Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient greater than 0.84, with the lowest RE% (0.41%) achieved by Res-Net with cross-entropy loss. For the maximum 3-dimensional diameter and sphericity shape features, all networks achieved an RE% of ≤5% and ≤10%, respectively, reflecting small variability. CONCLUSIONS: Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively across networks; cross-entropy for Res-Net and Dense-VNet and Dice plus XEnt for NN-UNet emerged as reliable configurations for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
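
The Dice plus XEnt loss named above is commonly implemented as the sum of a soft-Dice term and a cross-entropy term; the sketch below follows that convention (the smoothing constant `eps` is a common default, not taken from the paper).

```python
# Sketch of a combined soft-Dice + cross-entropy segmentation loss.
import torch
import torch.nn.functional as F

def dice_xent_loss(logits, target, eps=1e-6):
    """logits: (N, 1, D, H, W) raw scores; target: same shape, binary mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    xent = F.binary_cross_entropy_with_logits(logits, target.float())
    return (1 - dice) + xent
```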


Subject(s)
Deep Learning, Head and Neck Neoplasms, Algorithms, Head and Neck Neoplasms/diagnostic imaging, Humans, Image Processing, Computer-Assisted, Positron Emission Tomography Computed Tomography, Tumor Burden