Results 1 - 7 of 7
1.
Med Phys ; 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38335175

ABSTRACT

BACKGROUND: Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology remains limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification by center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using both centralized and federated learning (FL) approaches. For FL, we employed DPFL approaches. Membership inference attacks were also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test sets. In addition, the models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. The centralized and DPFL models achieved mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84), respectively. The DeLong test revealed no statistically significant difference between the two models (p-value = 0.98). The AUC values for the membership inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 (95% CI for the mean AUC: 0.500-0.501). CONCLUSION: The performance of the proposed model was comparable to that of centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
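To illustrate the DPFL training scheme described in this record, the following is a minimal sketch of differentially private federated averaging: each center's weight update is clipped to a bounded L2 norm and perturbed with Gaussian noise before the server averages. All function names and hyperparameters (clip_norm, noise_std) are illustrative assumptions, not the study's actual implementation.

import numpy as np

def dp_client_update(global_w, local_w, clip_norm=1.0, noise_std=0.1, rng=None):
    # Clip the client's weight delta to an L2 bound, then add Gaussian noise.
    # clip_norm and noise_std are assumed values, not the paper's settings.
    rng = rng or np.random.default_rng()
    delta = local_w - global_w
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip_norm / (norm + 1e-12))
    return delta + rng.normal(0.0, noise_std * clip_norm, size=delta.shape)

def federated_round(global_w, client_weights):
    # One communication round: average the privatized deltas from all centers.
    deltas = [dp_client_update(global_w, w) for w in client_weights]
    return global_w + np.mean(deltas, axis=0)

# Toy usage: 19 centers, each holding locally trained weights.
rng = np.random.default_rng(0)
global_w = np.zeros(8)
local = [global_w + rng.normal(0.0, 0.5, 8) for _ in range(19)]
global_w = federated_round(global_w, local)

Because only clipped, noised deltas leave each center, a membership inference attacker observing the exchanged updates gains little signal, which is consistent with the near-chance attack AUCs reported above.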

2.
J Biomed Inform ; 150: 104583, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38191010

ABSTRACT

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD: Our approach involves utilizing a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes. Each code is responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise. When applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be applied to other medical imaging modalities as well.


Subject(s)
Algorithms, Privacy, Information Dissemination
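The obfuscation step described in this record can be sketched as follows: a private seed drives a pseudo-random noise generator, the noise is added to each compact latent code before public storage, and an authorized key holder regenerates and subtracts the same noise to recover the code. The auto-encoder itself is omitted; code, key_seed, and noise_scale are hypothetical stand-ins, not the paper's API.

import numpy as np

def obfuscate(code, key_seed, noise_scale=10.0):
    # Add seeded pseudo-random noise so the stored code reveals nothing useful.
    rng = np.random.default_rng(key_seed)
    return code + rng.normal(0.0, noise_scale, size=code.shape)

def deobfuscate(public_code, key_seed, noise_scale=10.0):
    # An authorized user regenerates the identical noise from the key and subtracts it.
    rng = np.random.default_rng(key_seed)
    return public_code - rng.normal(0.0, noise_scale, size=public_code.shape)

code = np.array([0.0, 1.2, 0.0, -0.7])    # one sparse latent code from the auto-encoder
public = obfuscate(code, key_seed=12345)  # safe to store in the public domain
assert np.allclose(deobfuscate(public, key_seed=12345), code)

Without the seed, an attacker would have to guess the correct noise realization for every code simultaneously, which is the computational-infeasibility argument the abstract relies on.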
3.
Eur J Nucl Med Mol Imaging ; 51(1): 40-53, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37682303

ABSTRACT

PURPOSE: Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues while building centre-specific models that detect and correct artefacts present in PET images. METHODS: Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images using centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed on the remaining 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS: The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in mean absolute errors (MAE) of 0.42 ± 0.21 (95% CI: 0.38-0.47), 0.32 ± 0.23 (95% CI: 0.27-0.37) and 0.28 ± 0.15 (95% CI: 0.25-0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) on the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions. CONCLUSION: The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.


Subject(s)
Positron Emission Tomography Computed Tomography, Prostatic Neoplasms, Male, Humans, Positron Emission Tomography Computed Tomography/methods, Artifacts, Gallium Radioisotopes, Privacy, Positron-Emission Tomography/methods, Machine Learning, Image Processing, Computer-Assisted/methods
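The two-stage federated transfer learning (FTL) idea in this record, federated pre-training of a shared model followed by centre-specific adaptation, can be sketched abstractly as below. The toy local_train step stands in for the actual U2Net optimization, none of the names or hyperparameters come from the paper, and the differential-privacy noise is omitted for brevity.

import numpy as np

def local_train(w, data, lr=0.05, steps=20):
    # Placeholder local optimizer: pull weights toward the centre's data mean.
    for _ in range(steps):
        w = w - lr * (w - data.mean(axis=0))
    return w

def fedavg(weights):
    # Plain federated averaging of the centres' locally trained weights.
    return np.mean(weights, axis=0)

rng = np.random.default_rng(1)
centres = [rng.normal(c, 1.0, size=(50, 4)) for c in range(8)]  # 8 toy centres

# Stage 1: federated pre-training of a shared global model.
global_w = np.zeros(4)
for _ in range(5):  # communication rounds
    global_w = fedavg([local_train(global_w, d) for d in centres])

# Stage 2: transfer - each centre fine-tunes its own copy on local data only,
# yielding the centre-specific artefact-correction models the study describes.
centre_models = [local_train(global_w.copy(), d, steps=50) for d in centres]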
4.
Comput Methods Programs Biomed ; 240: 107706, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37506602

ABSTRACT

BACKGROUND AND OBJECTIVE: Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS: A dataset of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm using dual-channel PET/CT images. We evaluated different frameworks (single center-based, centralized baseline, as well as seven different FL algorithms) using 68 PET/CT images (20% of each center's data). In particular, the implemented FL algorithms included clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS: The Dice coefficient was 0.80 ± 0.11 for both the centralized and SeAg FL algorithms. All FL approaches matched the performance of the centralized learning model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the differences were not statistically significant. All centralized and FL algorithms, except the single center-based approach, resulted in relative errors of less than 5% for SUVmax and SUVmean. Centralized and FL algorithms significantly outperformed the single center-based baseline. CONCLUSIONS: The developed FL algorithms, which achieved performance on par with the centralized method, exhibited promising performance for HN tumor segmentation from PET/CT images.


Subject(s)
Deep Learning, Neoplasms, Humans, Algorithms, Image Processing, Computer-Assisted/methods, Neoplasms/diagnostic imaging, Positron Emission Tomography Computed Tomography/methods
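Among the aggregators listed in this record, the quantile-based ones (ClQu, ZeQu, and the clipping inside GDP-AQuCl) share one idea: estimate a norm threshold from a quantile of the current round's client update norms, then clip or zero updates that exceed it before averaging. The sketch below is a hedged illustration of that idea under assumed parameters, not the exact algorithms benchmarked.

import numpy as np

def quantile_aggregate(updates, q=0.8, mode="clip"):
    # Estimate an adaptive threshold as the q-quantile of the update norms.
    norms = np.array([np.linalg.norm(u) for u in updates])
    thresh = np.quantile(norms, q)
    kept = []
    for u, n in zip(updates, norms):
        if n <= thresh:
            kept.append(u)                  # small updates pass through unchanged
        elif mode == "clip":
            kept.append(u * (thresh / n))   # ClQu-like: rescale to the threshold
        # mode == "zero": ZeQu-like, the oversized update is dropped entirely
    return np.mean(kept, axis=0)

rng = np.random.default_rng(2)
updates = [rng.normal(0.0, s, 16) for s in (1, 1, 1, 1, 5)]  # one outlier client
agg_clip = quantile_aggregate(updates, mode="clip")
agg_zero = quantile_aggregate(updates, mode="zero")

Bounding update norms this way limits the influence of any single center, which is also what makes the Gaussian noise calibration in GDP-AQuCl possible.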
5.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model for AC/SC of PET images in a multicenter setting, without direct sharing of data, using federated learning (FL). METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset came from 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, where a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between the different strategies revealed no significant differences between the CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods compared to reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than center-based models, comparable to centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.


Subject(s)
Deep Learning, Image Processing, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography/methods, Magnetic Resonance Imaging/methods
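The sequential versus parallel distinction in this record can be sketched abstractly: in parallel FL (FL-PL) every center starts each round from the same global weights and the server averages the results, whereas in sequential FL (FL-SQ) a single model is handed from center to center, each continuing training where the previous one stopped. The toy local_train below is an assumed stand-in for the nested U-Net optimization.

import numpy as np

def local_train(w, data, lr=0.05, steps=20):
    # Toy local optimizer: pull weights toward the center's data mean.
    for _ in range(steps):
        w = w - lr * (w - data.mean(axis=0))
    return w

def parallel_round(w, centres):
    # FL-PL: all centers train in parallel from the same weights; average results.
    return np.mean([local_train(w, d) for d in centres], axis=0)

def sequential_round(w, centres):
    # FL-SQ: the model visits the centers one after another in sequence.
    for d in centres:
        w = local_train(w, d)
    return w

rng = np.random.default_rng(3)
centres = [rng.normal(c, 1.0, size=(30, 4)) for c in range(6)]  # 6 toy centers
w_pl = np.zeros(4)
w_sq = np.zeros(4)
for _ in range(10):  # communication rounds
    w_pl = parallel_round(w_pl, centres)
    w_sq = sequential_round(w_sq, centres)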
6.
Clin Nucl Med ; 47(7): 606-617, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35442222

ABSTRACT

PURPOSE: The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective was to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. METHODS: PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm³) and then normalized. PET image subvolumes (12 × 12 × 12 cm³) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test (20% of patients) sets. The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, where the datasets are pooled on one server. Segmentation metrics, including the Dice similarity and Jaccard coefficients, as well as percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis, were computed and compared with manual delineations. RESULTS: The performance of the centralized versus federated DL methods was nearly identical for the segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%), and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed. CONCLUSION: The developed federated DL model achieved comparable quantitative performance with respect to the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and legal and ethical issues in clinical data sharing.


Subject(s)
Deep Learning, Head and Neck Neoplasms, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography
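The preprocessing pipeline in this record (SUV conversion, resampling to 3 × 3 × 3 mm³ isotropic voxels, 12 × 12 × 12 cm³ subvolume extraction) might look like the sketch below. The body-weight SUV formula is standard; decay correction is omitted, and all parameter values in the usage example are invented for illustration.

import numpy as np
from scipy.ndimage import zoom

def to_suv(activity_bq_ml, injected_dose_bq, weight_kg):
    # Body-weight SUV: tissue activity divided by injected dose per gram of body mass.
    return activity_bq_ml / (injected_dose_bq / (weight_kg * 1000.0))

def resample_isotropic(vol, spacing_mm, target_mm=3.0):
    # Trilinear resampling to isotropic voxels (e.g., 3 x 3 x 3 mm^3).
    factors = [s / target_mm for s in spacing_mm]
    return zoom(vol, factors, order=1)

def crop_subvolume(vol, center_vox, size_vox=40):
    # 40 voxels at 3 mm per side = the 12 x 12 x 12 cm^3 subvolume used in the study.
    half = size_vox // 2
    return vol[tuple(slice(max(c - half, 0), c + half) for c in center_vox)]

# Toy usage with a synthetic activity map and assumed scan parameters.
vol = np.random.rand(100, 120, 120) * 5000.0          # activity map, Bq/mL
suv = to_suv(vol, injected_dose_bq=3.7e8, weight_kg=75.0)
suv_iso = resample_isotropic(suv, spacing_mm=(2.0, 4.0, 4.0))
patch = crop_subvolume(suv_iso, center_vox=(33, 80, 80))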
7.
Med Phys ; 48(9): 5059-5071, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34174787

ABSTRACT

PURPOSE: We assess the performance of a recurrent frame generation algorithm for prediction of late frames from initial frames in dynamic brain PET imaging. METHODS: Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects were retrospectively employed, with tenfold cross-validation. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25-90 minutes) from the initial 13 frames (0-25 minutes). Quantitative analysis of the predicted dynamic PET frames was performed on the test and validation datasets using established metrics. RESULTS: The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. The Bland-Altman plots showed the lowest tracer uptake bias (-0.04) for the putamen region and the smallest variance (95% CI: -0.38, +0.14) for the cerebellum. Region-wise Patlak graphical analysis in the caudate and putamen regions for eight subjects from the test and validation datasets showed average biases for Ki and distribution volume of 4.3%, 5.1% and 4.4%, 4.2%, respectively (P-value < 0.05). CONCLUSION: We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the time frames covering the last 65 minutes from the initial 25 minutes of a scan, thus enabling a significant reduction in scanning time.


Subject(s)
Positron Emission Tomography Computed Tomography, Positron-Emission Tomography, Brain/diagnostic imaging, Humans, Retrospective Studies, Tissue Distribution
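The Patlak graphical analysis used for validation in this record has a compact closed form: after an equilibration time t*, the plot of C_t(t)/C_p(t) against (cumulative integral of C_p)/C_p(t) becomes linear, with slope Ki and intercept the distribution volume V. A minimal sketch on synthetic curves, with t* and the input-function shapes assumed:

import numpy as np

def cumtrapz(y, t):
    # Cumulative trapezoidal integral of y(t), same length as y.
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def patlak(t, c_tissue, c_plasma, t_star=30.0):
    # Fit the linear late portion of the Patlak plot: slope = Ki, intercept = V.
    x = cumtrapz(c_plasma, t) / c_plasma
    y = c_tissue / c_plasma
    mask = t >= t_star
    ki, v = np.polyfit(x[mask], y[mask], 1)
    return ki, v

# Synthetic demo: exponential plasma input, irreversible-uptake tissue curve.
t = np.linspace(1.0, 90.0, 90)                 # minutes
c_p = 100.0 * np.exp(-0.05 * t) + 5.0          # plasma input function
ki_true, v_true = 0.02, 0.3
c_t = ki_true * cumtrapz(c_p, t) + v_true * c_p
ki_est, v_est = patlak(t, c_t, c_p)            # recovers ki_true and v_true

The reported percent biases for Ki and distribution volume correspond to the slope and intercept errors of this fit between predicted and measured late frames.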