Results 1 - 20 of 38
1.
Eur J Nucl Med Mol Imaging ; 51(6): 1516-1529, 2024 May.
Article in English | MEDLINE | ID: mdl-38267686

ABSTRACT

PURPOSE: Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been increasing interest in leveraging their calculation speed and automation capabilities. We aimed to develop a hybrid transformer-based DL model that incorporates a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy, with the goal of achieving accuracy closely aligned with Monte Carlo (MC) simulations, considered the reference standard. We extended our analysis to include the MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS: We used a dataset of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, the MIRD formalism approaches, namely the single S-value (SSV) and MSV techniques, were applied. A UNEt TRansformer (UNETR) DL architecture was trained using five-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, whereas the difference between MC and MSV (MC-MSV) was set as the output. The DL output was then added back to the MSV maps to recover the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both the voxel level and the organ level (organs at risk and lesions). RESULTS: The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) than MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for the DL, MSV, and SSV approaches, respectively. The computational time for MC was the highest (~2 days for a single-bed SPECT study), whereas the DL-based approach was the most time-efficient (3 s for a single-bed SPECT study). Organ-wise analysis showed absolute percent errors in lesion absorbed doses of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% for the SSV, MSV, and DL approaches, respectively. CONCLUSION: A hybrid transformer-based DL model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, particularly in heterogeneous regions. The model achieved accuracy close to the MC gold standard and has potential for clinical implementation on large-scale datasets.
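The single S-value (SSV) step of the MIRD formalism described above amounts to superposing one dose kernel over the time-integrated activity map (the MSV variant additionally selects among kernels per tissue density). A minimal NumPy sketch with an illustrative flat kernel, not an actual 177Lu S-value table:

```python
import numpy as np

def ssv_dose_map(activity, s_kernel):
    """SSV dosimetry: superpose a voxel S-value kernel over the activity
    map (zero-padded; odd-sized symmetric kernel, so correlation and
    convolution coincide)."""
    cx, cy, cz = (s // 2 for s in s_kernel.shape)
    pad = np.pad(activity, ((cx, cx), (cy, cy), (cz, cz)))
    out = np.zeros_like(activity)
    nx, ny, nz = activity.shape
    for i in range(s_kernel.shape[0]):
        for j in range(s_kernel.shape[1]):
            for k in range(s_kernel.shape[2]):
                out += s_kernel[i, j, k] * pad[i:i + nx, j:j + ny, k:k + nz]
    return out

def voxel_rae(pred, ref, eps=1e-9):
    """Voxel-wise relative absolute error (%) over non-zero reference voxels."""
    mask = ref > eps
    return 100.0 * np.mean(np.abs(pred[mask] - ref[mask]) / ref[mask])

# toy example: a point source in an 8x8x8 volume with a flat 3x3x3 kernel
activity = np.zeros((8, 8, 8))
activity[4, 4, 4] = 1.0
kernel = np.ones((3, 3, 3)) / 27.0   # illustrative, not a 177Lu kernel
dose = ssv_dose_map(activity, kernel)
```

Because the kernel is normalized and the source lies in the interior, the superposition conserves the total deposited quantity.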


Subjects
Octreotide, Octreotide/analogs & derivatives, Organometallic Compounds, Radiometry, Radiopharmaceuticals, Single Photon Emission Computed Tomography Computed Tomography, Humans, Octreotide/therapeutic use, Organometallic Compounds/therapeutic use, Single Photon Emission Computed Tomography Computed Tomography/methods, Radiometry/methods, Radiopharmaceuticals/therapeutic use, Precision Medicine/methods, Deep Learning, Male, Female, Monte Carlo Method, Computer-Assisted Image Processing/methods, Neuroendocrine Tumors/radiotherapy, Neuroendocrine Tumors/diagnostic imaging
2.
Article in English | MEDLINE | ID: mdl-38981950

ABSTRACT

BACKGROUND: Overall survival (OS) and progression-free survival (PFS) are crucial metrics for evaluating treatment efficacy and impact. This study evaluated the role of clinical biomarkers and dosimetry parameters in the survival outcomes of patients undergoing 90Y selective internal radiation therapy (SIRT). MATERIALS/METHODS: This preliminary, retrospective analysis included 17 patients with hepatocellular carcinoma (HCC) treated with 90Y SIRT. The patients underwent personalized treatment planning and voxel-wise dosimetry, after which OS and PFS were evaluated. Three structures were delineated: tumoral liver (TL), normal perfused liver (NPL), and whole normal liver (WNL). A total of 289 dose-volume constraints (DVCs) were extracted from dose-volume histograms of physical and biologically effective dose (BED) maps calculated on 99mTc-MAA and 90Y SPECT/CT images. Subsequently, the DVCs and 16 clinical biomarkers were used as features for univariate and multivariate analysis. The Cox proportional hazards model was employed for univariate analysis, and the hazard ratio (HR) and concordance index (C-index) were calculated for each feature. Using eight different strategies, a cross-combination of various models and feature selection (FS) methods was applied for multivariate analysis. The performance of each model was assessed using the C-index averaged over a three-fold nested cross-validation framework. Kaplan-Meier (KM) curves were employed for univariate and machine learning (ML) model performance assessment. RESULTS: The median OS was 11 months [95% CI: 8.5, 13.09], whereas the median PFS was seven months [95% CI: 5.6, 10.98]. Univariate analysis identified the presence of ascites (HR: 9.2 [1.8, 47]), the aim of SIRT (segmentectomy, lobectomy, palliative) (HR: 0.066 [0.0057, 0.78]), aspartate aminotransferase (AST) level (HR: 0.1 [0.012, 0.86]), and MAA-Dose-V205(%)-TL (HR: 8.5 [1, 72]) as predictors of OS. 90Y-derived parameters were associated with PFS but not with OS. Among the dosimetry parameters, MAA-Dose-V205(%)-WNL and MAA-BED-V400(%)-WNL (HR: 13 [1.5, 120]), as well as 90Y-Dose-mean-TL, 90Y-D50-TL-Gy, 90Y-Dose-V205(%)-TL, and 90Y-BED-V400(%)-TL (HR: 15 [1.8, 120]), were highly associated with PFS. The highest C-index in the multivariate ML analysis was 0.94 ± 0.13, obtained with the Variable Hunting variable-importance (VH.VIMP) FS method and the Cox proportional hazards model predicting OS from clinical features. However, the combination of the VH.VIMP FS method with a generalized linear model network predicting OS from therapy-strategy features outperformed the other models in terms of both C-index and stratification of KM curves (C-index: 0.93 ± 0.14; log-rank p-value of 0.023 for KM curve stratification). CONCLUSION: This preliminary study confirmed the role of baseline clinical biomarkers and dosimetry parameters in predicting treatment outcome, paving the way for the establishment of a dose-effect relationship. In addition, ML combined with these features was shown to be a helpful tool in the clinical management of patients, both before and after 90Y SIRT.
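The concordance index used to score these survival models can be sketched with Harrell's pairwise definition, handling right-censoring in the standard way. This is a minimal illustration with a toy cohort, not the VH.VIMP pipeline itself:

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index: among usable pairs (the earlier
    subject had an observed event), count pairs where the higher
    predicted risk also failed earlier; ties in risk count 0.5."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# toy cohort: survival times (months), event indicators, predicted risks
months = np.array([3.0, 7.0, 11.0, 14.0])
events = np.array([1, 1, 1, 0])        # last subject is censored
risks = np.array([2.1, 1.4, 0.9, 0.3])  # perfectly anti-ordered with time
```

With risks perfectly anti-ordered against survival time, every usable pair is concordant and the index is 1; negating the risks drives it to 0.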

3.
J Biomed Inform ; 150: 104583, 2024 02.
Article in English | MEDLINE | ID: mdl-38191010

ABSTRACT

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD: Our approach involves utilizing a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes. Each code is responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise. When applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be applied to other medical imaging modalities as well.
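The key-dependent obfuscation step described above can be illustrated with key-seeded pseudo-random noise added to a latent code. This toy sketch uses a plain NumPy generator as the noise source; a real deployment would need a cryptographically secure construction:

```python
import numpy as np

def obfuscate(code, key):
    """Add key-seeded pseudo-random noise to a latent code. Only a
    holder of the private key can regenerate and subtract the noise."""
    rng = np.random.default_rng(key)
    return code + rng.standard_normal(code.shape)

def deobfuscate(obfuscated, key):
    """Regenerate the same noise stream from the key and remove it."""
    rng = np.random.default_rng(key)
    return obfuscated - rng.standard_normal(obfuscated.shape)

code = np.array([1.0, 0.0, 2.5, 0.0])   # hypothetical sparse code
public = obfuscate(code, key=1234)       # safe to store publicly
recovered = deobfuscate(public, key=1234)
```

The round trip recovers the code exactly because the same seed reproduces the same noise stream, while the public representation differs from the original.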


Subjects
Algorithms, Privacy, Information Dissemination
4.
Eur J Nucl Med Mol Imaging ; 50(7): 1881-1896, 2023 06.
Article in English | MEDLINE | ID: mdl-36808000

ABSTRACT

PURPOSE: The partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated owing to the influence of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS: Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA studies, and their corresponding T1-weighted MR images were included in this study. The iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis was performed using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR). Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS: The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29, +0.33 SUV; mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26, +0.24 SUV; mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION: An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT, and eliminates the need for accurate registration, segmentation, or characterization of the PET scanner's system response. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
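The Bland-Altman agreement analysis reported above reduces to the bias and 95% limits of agreement of the paired voxel differences. A minimal sketch with toy SUV values:

```python
import numpy as np

def bland_altman(pred, ref):
    """Bland-Altman agreement: mean difference (bias) and the 95% limits
    of agreement, bias +/- 1.96 * SD of the paired differences."""
    d = np.asarray(pred, float) - np.asarray(ref, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy SUV measurements: prediction uniformly 0.02 SUV above reference
ref = np.array([1.0, 2.0, 3.0, 4.0])
pred = ref + 0.02
bias, (lo, hi) = bland_altman(pred, ref)
```

With a constant offset the differences have zero spread, so both limits of agreement collapse onto the bias.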


Subjects
Aniline Compounds, Fluorodeoxyglucose F18, Humans, Positron-Emission Tomography/methods, Brain/diagnostic imaging, Computer-Assisted Image Processing/methods
5.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 03.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, and they remain challenging in PET-only and PET/MRI systems. They can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, assembling a large, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model for AC/SC of PET images in a multicenter setting, without direct sharing of data, using federated learning (FL). METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset comprised 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, in which the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between the different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison, with respect to the reference CT-ASC, exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations of the CZ and FL-based methods with the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than center-based models, comparable with centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
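One aggregation round of the parallel FL (FL-PL) strategy can be sketched as size-weighted parameter averaging across centers. This is a generic FedAvg-style illustration; the paper's exact aggregation rule may differ:

```python
import numpy as np

def fedavg(center_weights, center_sizes):
    """One parallel-FL aggregation round: average each layer's parameters
    across centers, weighted by the centers' dataset sizes."""
    total = float(sum(center_sizes))
    n_layers = len(center_weights[0])
    return [
        sum((n / total) * w[k] for w, n in zip(center_weights, center_sizes))
        for k in range(n_layers)
    ]

# toy: two centers, one single-parameter layer each
center_a = [np.array([2.0])]
center_b = [np.array([4.0])]
global_model = fedavg([center_a, center_b], center_sizes=[10, 30])
```

With center B holding three times the data, the aggregated parameter sits at 0.25 * 2.0 + 0.75 * 4.0 = 3.5; no raw images ever leave the centers, only parameters.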


Subjects
Deep Learning, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography/methods, Magnetic Resonance Imaging/methods
6.
Eur J Nucl Med Mol Imaging ; 51(1): 40-53, 2023 12.
Article in English | MEDLINE | ID: mdl-37682303

ABSTRACT

PURPOSE: Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation dose to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues when building centre-specific models that detect and correct artefacts present in PET images. METHODS: Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential-privacy FTL frameworks. Quantitative analysis was performed on the remaining 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted a qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS: The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (95% CI: 0.38 to 0.47), 0.32 ± 0.23 (95% CI: 0.27 to 0.37) and 0.28 ± 0.15 (95% CI: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) on the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION: The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated into the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
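The differential-privacy ingredient of such an FTL framework is commonly realised by clipping each model update to a maximum L2 norm and adding Gaussian noise scaled to the clip bound (the DP-SGD recipe). A generic sketch with illustrative clip norm and noise multiplier, not the paper's exact mechanism:

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Differential-privacy sanitizer for a model update: clip to a
    maximum L2 norm, then add Gaussian noise scaled to the clip bound."""
    if rng is None:
        rng = np.random.default_rng(0)
    update = np.asarray(update, float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, update.shape)

# with the noise term switched off, only the clipping is visible
clipped_only = dp_sanitize(np.array([3.0, 4.0]), clip_norm=1.0, noise_mult=0.0)
```

An update of norm 5 is rescaled onto the clip sphere of radius 1, bounding any single patient's influence before noise is added.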


Subjects
Positron Emission Tomography Computed Tomography, Prostatic Neoplasms, Male, Humans, Positron Emission Tomography Computed Tomography/methods, Artifacts, Gallium Radioisotopes, Privacy, Positron-Emission Tomography/methods, Machine Learning, Computer-Assisted Image Processing/methods
7.
Eur Radiol ; 33(5): 3243-3252, 2023 May.
Article in English | MEDLINE | ID: mdl-36703015

ABSTRACT

OBJECTIVES: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS: We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, the images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of the patients' bodies were determined by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of our model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS: The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which was 9.35 ± 14.94 mm and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 mm and 8.26 ± 6.96 mm, and the LCAP metric was 1.56 ± 10.8 mm and -0.27 ± 16.29 mm, for C1 and C2, respectively. The errors in terms of BCAP and LCAP were higher for larger patients (p-value < 0.01). CONCLUSION: The accuracy of the proposed method was comparable to that of available alternative methods, with the advantage of being free from errors caused by objects blocking the camera's visibility. KEY POINTS: • Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice that can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT localizer, achieving a performance comparable to alternative techniques, such as an external 3D visual camera. • The advantage of the proposed method is that it is free from errors caused by objects blocking the camera's visibility and could be implemented on imaging consoles as a patient positioning support tool.
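The bounding-box centerline estimate at the heart of the method can be sketched on a single binary body mask (a toy 2D slice; the study first predicts the 3D axial images, which is omitted here):

```python
import numpy as np

def centerline_offset(body_mask):
    """Mis-centering estimate: offset (in pixels) of the body bounding
    box midline from the image midline along the second (left-right) axis."""
    cols = np.where(body_mask.any(axis=0))[0]
    box_mid = (cols.min() + cols.max()) / 2.0
    img_mid = (body_mask.shape[1] - 1) / 2.0
    return box_mid - img_mid

# toy slice: a body occupying columns 3-4 of a 5-column image
mask = np.zeros((5, 5), dtype=bool)
mask[:, 3:5] = True
offset = centerline_offset(mask)   # body midline 3.5 vs image midline 2.0
```

Multiplying such a pixel offset by the pixel spacing yields the millimetre mis-centering compared against the table position.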


Subjects
Neural Networks (Computer), X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Three-Dimensional Imaging, Patient Positioning/methods, Computer-Assisted Image Processing/methods
8.
Eur Radiol ; 33(12): 9411-9424, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37368113

ABSTRACT

OBJECTIVE: We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. METHODS: The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned at two tube voltages, through transfer learning, with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations were performed using the mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %). RESULTS: The model performance on the 120 kVp and TCM test set in terms of the voxel-wise ME, MAE, RE, and RAE was -0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, -1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, in terms of ME, MAE, RE, and RAE were -0.144 ± 0.342 mGy, 0.23 ± 0.28 mGy, -1.11 ± 2.90%, and 2.34 ± 2.03%, respectively. CONCLUSION: Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. CLINICAL RELEVANCE STATEMENT: We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant because accurate dose calculation for patients can be carried out within acceptable computational time, in contrast to lengthy Monte Carlo calculations. KEY POINTS: • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation. • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation. • By generating the dose distribution from a single source position, our model can generate accurate and personalized dose maps for a wide range of acquisition parameters.
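Per-source-position dose maps like those predicted above can be combined into a whole-scan map by weighting each angle's contribution by its relative tube current (TCM). A minimal sketch; the normalization to relative weights and the toy maps are illustrative, not the paper's exact superposition:

```python
import numpy as np

def total_dose(per_angle_dose, tube_current_ma):
    """Superpose per-source-position dose maps, each weighted by the
    fraction of total tube current delivered at that angle."""
    w = np.asarray(tube_current_ma, dtype=float)
    w = w / w.sum()   # relative TCM weights (illustrative normalization)
    return np.tensordot(w, per_angle_dose, axes=1)

# two source angles over a 2x2 slice, second angle at double the current
maps = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 1.0], [0.0, 0.0]]])
dose = total_dose(maps, tube_current_ma=[100.0, 200.0])
```

The angle with double the current contributes twice the weight, so the two hot voxels end up in a 1:2 ratio.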


Subjects
Neural Networks (Computer), Whole-Body Imaging, Humans, Imaging Phantoms, Monte Carlo Method, X-Ray Computed Tomography, Radiation Dosage
9.
Hum Brain Mapp ; 43(16): 5032-5043, 2022 11.
Article in English | MEDLINE | ID: mdl-36087092

ABSTRACT

We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS), to increase the signal-to-noise ratio (SNR) and contrast of abnormalities and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed, and the performance of both models (IS and SS) was compared with the reference TOF and non-TOF data. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM values of 0.99 ± 0.03 and 0.98 ± 0.02, and RMSE values of 0.12 ± 0.09 and 0.16 ± 0.04, were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. The Bland-Altman analysis revealed that the lowest tracer uptake bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.
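The SNR benefit of TOF that motivates this synthesis is commonly estimated with the back-of-the-envelope gain sqrt(D / dx), where dx = c * dt / 2 is the TOF localization uncertainty for timing resolution dt and D is the object diameter. The 550 ps timing resolution below is an illustrative value, not the study scanner's specification:

```python
def tof_snr_gain(object_diameter_cm, timing_res_ps):
    """Back-of-the-envelope TOF SNR gain: sqrt(D / dx), where
    dx = c * dt / 2 is the TOF localization uncertainty."""
    c_cm_per_s = 3.0e10           # speed of light in cm/s
    dx_cm = c_cm_per_s * timing_res_ps * 1e-12 / 2.0
    return (object_diameter_cm / dx_cm) ** 0.5

# a 40 cm object with 550 ps timing: dx ~ 8.25 cm, gain ~ 2.2x
gain = tof_snr_gain(object_diameter_cm=40.0, timing_res_ps=550.0)
```

For brain imaging the object diameter (and hence the gain) is smaller, which is consistent with TOF information being subtle enough to attempt learning it from non-TOF data.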


Subjects
Deep Learning, Fluorodeoxyglucose F18, Humans, Computer-Assisted Image Processing/methods, Positron-Emission Tomography, Positron Emission Tomography Computed Tomography, Brain/diagnostic imaging
10.
J Nucl Cardiol ; 29(4): 1552-1561, 2022 08.
Article in English | MEDLINE | ID: mdl-33527332

ABSTRACT

BACKGROUND: Proton pump inhibitors (PPIs) have been speculated to cause gastric wall uptake (GWU) in MPI scans. However, the uptake mechanism and prevention methods are less well studied. In this prospective trial, we aimed to evaluate the impact of gastroprotective medications on GWU and possible solutions. METHODS: A total of 351 consecutive patients scheduled for a 2-day rest/stress 99mTc-MIBI scan were distributed into 5 groups. Three to seven days after the baseline rest scan, the stress scan was acquired following intervention in the trial group, which consisted of patients with a history of PPI intake randomly assigned to 3 subgroups: discontinuing PPIs (A), replacement with H2 blockers (B), and continuing PPIs (C). Patients already receiving H2 blockers continued them as before (D), and the remaining patients served as the control group (E). GWU was graded relative to myocardial uptake. RESULTS: In the rest phase, all groups had significantly higher GWU than the control group. In the stress phase, group A had less GWU than group B (p-value < 0.05), and both had significantly less GWU than group C (p-value < 0.001). There was no significant difference between PPI discontinuation periods of 3-5 days versus 5-7 days. There was a significant association between the duration of oral PPI intake, but not IV PPI intake, and GWU. GWU was significantly lower with oral than with IV PPI administration. CONCLUSION: PPIs significantly increase GWU, and discontinuing them for at least 3-5 days significantly reduces it. H2 antagonists are a good alternative for patients who cannot tolerate dyspepsia symptoms.


Subjects
Proton Pump Inhibitors, Technetium Tc 99m Sestamibi, Histamine H2 Antagonists/pharmacology, Histamine H2 Antagonists/therapeutic use, Humans, Perfusion, Prospective Studies, Proton Pump Inhibitors/pharmacology, Proton Pump Inhibitors/therapeutic use, X-Ray Computed Tomography
11.
J Appl Clin Med Phys ; 23(9): e13696, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35699200

ABSTRACT

PURPOSE: To investigate the potential benefits of FDG PET radiomic feature maps (RFMs) for target delineation in non-small cell lung cancer (NSCLC) radiotherapy. METHODS: Thirty-two NSCLC patients undergoing FDG PET/CT imaging were included. For each patient, nine grey-level co-occurrence matrix (GLCM) RFMs were generated. The gross target volume (GTV) and clinical target volume (CTV) were contoured on CT (GTVCT, CTVCT), PET (GTVPET40, CTVPET40), and RFMs (GTVRFM, CTVRFM). Intratumoral heterogeneity areas were segmented as GTVPET50-Boost and radiomic boost target volume (RTVBoost) on PET and RFMs, respectively. GTVCT in homogeneous tumors and GTVPET40 in heterogeneous tumors were considered the gold-standard GTV (GTVGS). One-way analysis of variance was conducted to determine the threshold that yields the best conformity of GTVRFM with GTVGS. The Dice similarity coefficient (DSC) and mean absolute percent error (MAPE) were calculated. Linear regression analysis was employed to report the correlations between the gold-standard and RFM-derived target volumes. RESULTS: Entropy, contrast, and Haralick correlation (H-correlation) were selected for tumor segmentation. Threshold values of 80%, 50%, and 10% yielded the best conformity of GTVRFM-entropy, GTVRFM-contrast, and GTVRFM-H-correlation with GTVGS, respectively. The linear regression results showed positive correlations between GTVGS and GTVRFM-entropy (r = 0.98, p < 0.001), between GTVGS and GTVRFM-contrast (r = 0.93, p < 0.001), and between GTVGS and GTVRFM-H-correlation (r = 0.91, p < 0.001). Average threshold values of 45% and 15% resulted in the best segmentation match of CTVRFM-entropy and CTVRFM-contrast with CTVGS, respectively. Moreover, we used RFMs to determine RTVBoost in the heterogeneous tumors. Comparison of RTVBoost with GTVPET50-Boost in terms of MAPE showed volume error differences of 31.7%, 36%, and 34.7% for RTVBoost-entropy, RTVBoost-contrast, and RTVBoost-H-correlation, respectively. CONCLUSIONS: FDG PET-based radiomic features in NSCLC demonstrated promising potential for decision support in radiotherapy, helping radiation oncologists delineate tumors and generate accurate segmentations of heterogeneous tumor regions.
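The percent-of-maximum thresholding and Dice-based evaluation described above can be sketched on a toy feature map; the 40% threshold below mirrors the PET40 convention, and the arrays are illustrative:

```python
import numpy as np

def threshold_segment(fmap, pct):
    """Binary target volume: voxels at or above pct% of the map maximum."""
    return fmap >= (pct / 100.0) * fmap.max()

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

fmap = np.array([[0.1, 0.5],
                 [0.9, 1.0]])
gtv = threshold_segment(fmap, 40)   # selects the 0.5, 0.9 and 1.0 voxels
```

Sweeping the percentage and maximizing the Dice coefficient against the gold-standard contour is the conformity search the study performs per feature map.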


Subjects
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Non-Small-Cell Lung Carcinoma/diagnostic imaging, Non-Small-Cell Lung Carcinoma/pathology, Non-Small-Cell Lung Carcinoma/radiotherapy, Fluorodeoxyglucose F18, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Lung Neoplasms/radiotherapy, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography/methods, Radiopharmaceuticals
12.
Eur Radiol ; 31(3): 1420-1431, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32879987

ABSTRACT

OBJECTIVES: The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for clinical diagnosis of COVID-19 patients. METHODS: In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output as training, test, and external validation set, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of predicted CT images was assessed using root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned reflecting subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). RESULTS: The radiation dose in terms of CT dose index (CTDIvol) was reduced by up to 89%. The RMSE decreased from 0.16 ± 0.05 to 0.09 ± 0.02 and from 0.16 ± 0.06 to 0.08 ± 0.02 for the predicted compared with ultra-low-dose CT images in the test and external validation set, respectively. The overall scoring assigned by radiologists showed an acceptance rate of 4.72 ± 0.57 out of 5 for reference full-dose CT images, while ultra-low-dose CT images rated 2.78 ± 0.9. The predicted CT images using the deep learning algorithm achieved a score of 4.42 ± 0.8. CONCLUSIONS: The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction. KEY POINTS: • Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis. 
• Deep learning-based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%. • Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers, and as such, further research and development is warranted to address these limitations.
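The RMSE and PSNR figures reported in this abstract are standard pixel-wise image-quality metrics; a minimal sketch of how they are computed on an image pair follows (synthetic data, not the study's images; the study additionally used SSIM):

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between predicted and reference images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred, ref, data_range=1.0):
    """Peak signal-to-noise ratio (dB) for intensities in [0, data_range]."""
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy stand-ins: a noisy "ultra-low-dose" slice vs. its "full-dose" reference
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.1, ref.shape), 0.0, 1.0)
print(rmse(noisy, ref), psnr(noisy, ref))
```

A successful full-dose prediction network would drive the RMSE of its output toward zero and the PSNR upward relative to the ultra-low-dose input.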


Subjects
COVID-19/diagnostic imaging , Deep Learning , X-Ray Computed Tomography/methods , Algorithms , Humans , Computer-Assisted Image Processing/methods , Computer Neural Networks , Radiation Doses , Reproducibility of Results , SARS-CoV-2 , Signal-to-Noise Ratio
13.
Comput Methods Programs Biomed ; 256: 108376, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39173481

ABSTRACT

BACKGROUND AND OBJECTIVE: We develop an efficient deep-learning-based dual-domain reconstruction method for sparse-view CT reconstruction with a small number of training parameters and comparable running time. We aim to investigate the model's capability and its clinical value by performing objective and subjective quality assessments using clinical CT projection data acquired on commercial scanners. METHODS: We designed two lightweight networks, namely Sino-Net and Img-Net, to restore the projection and image signals from the DD-Net reconstructed images in the projection and image domains, respectively. The proposed network has a small number of training parameters and comparable running time among dual-domain-based reconstruction networks and is easy to train (end-to-end). We prospectively collected clinical thoraco-abdominal CT projection data acquired on a Siemens Biograph 128 Edge CT scanner to train and validate the proposed network. Furthermore, we quantitatively evaluated the CT Hounsfield unit (HU) values of 21 organs and anatomic structures, such as the liver, aorta, and ribcage. We also analyzed the noise properties and compared the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) of the reconstructed images. In addition, two radiologists conducted a subjective qualitative evaluation, including the confidence and conspicuity of anatomic structures and the overall image quality, using a 1-5 Likert scoring system. RESULTS: Objective and subjective evaluations showed that the proposed algorithm achieves competitive results in eliminating noise and artifacts, restoring fine structural details, and recovering the edges and contours of anatomic structures using 384 views (1/6 sparse rate). The proposed method exhibited good computational cost performance on clinical projection data. CONCLUSION: This work presents an efficient dual-domain learning network for sparse-view CT reconstruction on raw projection data from a commercial scanner. 
The study also provides insights for designing an organ-based image quality assessment pipeline for sparse-view reconstruction tasks, potentially benefiting organ-specific dose reduction by sparse-view imaging.
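The SNR and CNR comparisons mentioned in the evaluation follow common ROI-based definitions; a sketch under the usual formulation (mean/std for SNR, mean difference over pooled noise for CNR; the paper's exact definitions may differ):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / standard deviation."""
    return float(np.mean(roi) / np.std(roi))

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two ROIs, using pooled ROI noise."""
    noise = np.sqrt((np.var(roi_a) + np.var(roi_b)) / 2.0)
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / noise)

rng = np.random.default_rng(1)
liver = rng.normal(60.0, 10.0, 500)   # hypothetical HU samples from a liver ROI
aorta = rng.normal(300.0, 10.0, 500)  # hypothetical HU samples from an aorta ROI
print(snr(liver), cnr(liver, aorta))
```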

14.
Med Phys ; 51(6): 4095-4104, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38629779

ABSTRACT

BACKGROUND: Contrast-enhanced computed tomography (CECT) provides much more information compared to non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing in public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research. PURPOSE: The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS: A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Masks of seven organs, including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features, namely the average, standard deviation, and 10th, 50th, and 90th percentiles, extracted from the above-mentioned masks were fed to machine learning models after feature selection and reduction to classify the CT images into one of the four above-mentioned classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics. RESULTS: The best performance was achieved by the Boruta feature selection method and the RF model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection selected all predictor features. The lowest classification accuracy was observed for class #2 (0.9888), which is still an excellent result. In the 10-fold strategy, only 33 of 2509 cases (∼1.4%) were misclassified. The performance across all folds was consistent. 
CONCLUSIONS: We developed a fast, accurate, reliable, and explainable methodology to classify contrast media phases which may be useful in data curation and annotation in big online datasets or local datasets with non-standard or no series description. Our model containing two steps of deep learning and machine learning may help to exploit available datasets more effectively.
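The five first-order statistics used as predictors are straightforward to extract once an organ mask is available; a minimal sketch on a synthetic volume (the segmentation step and the downstream Boruta/RF pipeline are omitted):

```python
import numpy as np

def first_order_features(volume, mask):
    """Average, standard deviation, and 10th/50th/90th percentiles of
    voxel intensities inside a binary organ mask."""
    vals = volume[mask > 0]
    return np.array([vals.mean(), vals.std(),
                     np.percentile(vals, 10),
                     np.percentile(vals, 50),
                     np.percentile(vals, 90)])

rng = np.random.default_rng(2)
ct = rng.normal(50.0, 30.0, (32, 32, 32))  # toy CT volume (HU values)
liver_mask = np.zeros(ct.shape, dtype=np.uint8)
liver_mask[8:24, 8:24, 8:24] = 1           # hypothetical liver mask
feats = first_order_features(ct, liver_mask)
print(feats)  # one 5-element feature vector per organ mask
```

Concatenating one such vector per organ (and per modality, where relevant) yields the tabular input the classifiers consume.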


Subjects
Automation , Contrast Media , Computer-Assisted Image Processing , Machine Learning , X-Ray Computed Tomography , Humans , Computer-Assisted Image Processing/methods , Abdominal Radiography , Abdomen/diagnostic imaging
15.
Radiat Oncol ; 19(1): 12, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38254203

ABSTRACT

BACKGROUND: This study aimed to investigate the value of clinical features, radiomic features extracted from gross tumor volumes (GTVs) delineated on CT images, dose distributions (Dosiomics), and the fusion of CT images and dose distributions for predicting outcomes in head and neck cancer (HNC) patients. METHODS: A cohort of 240 HNC patients from five different centers was obtained from The Cancer Imaging Archive. Seven strategies were applied, including four non-fusion strategies (Clinical, CT, Dose, DualCT-Dose) and three fusion algorithms (latent low-rank representation (LLRR), wavelet, and weighted least squares (WLS)). The fusion algorithms were used to fuse the pre-treatment CT images and 3-dimensional dose maps. Overall, 215 radiomics and Dosiomics features were extracted from the GTVs, along with seven clinical features. Five feature selection (FS) methods in combination with six machine learning (ML) models were implemented. The performance of the models was quantified using the concordance index (CI) in one-center-leave-out 5-fold cross-validation for overall survival (OS) prediction, considering the time-to-event. RESULTS: The mean CI and Kaplan-Meier curves were used for further comparisons. The CoxBoost ML model using the Minimal Depth (MD) FS method and the glmnet model using the Variable Hunting (VH) FS method showed the best performance, with CI = 0.73 ± 0.15 for features extracted from LLRR-fused images. In addition, both the glmnet-Cindex and Coxph-Cindex classifiers achieved a CI of 0.72 ± 0.14 by employing the dose images (plus the incorporated clinical features) only. CONCLUSION: Our results demonstrated that clinical features, Dosiomics, and the fusion of dose and CT images with specific ML-FS models could predict the overall survival of HNC patients with acceptable accuracy. Moreover, the performance of the ML methods among the three different strategies was almost comparable.
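The concordance index used to score the survival models compares risk ordering over all comparable patient pairs; a minimal Harrell's C-index sketch follows (toy data; production code would use a library such as lifelines or scikit-survival):

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.
    A pair (i, j) is comparable if the subject with the shorter time had an event."""
    n_conc, n_comp = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                n_comp += 1
                if risk_scores[i] > risk_scores[j]:
                    n_conc += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    n_conc += 0.5
    return n_conc / n_comp

times  = np.array([5.0, 8.0, 12.0, 20.0])
events = np.array([1, 1, 0, 1])          # 0 = censored
risk   = np.array([0.9, 0.6, 0.4, 0.1])  # higher risk -> shorter expected survival
print(concordance_index(times, events, risk))  # perfectly concordant -> 1.0
```

A CI of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, which puts the reported 0.73 ± 0.15 in context.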


Subjects
Head and Neck Neoplasms , Radiomics , Humans , Prognosis , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Machine Learning , X-Ray Computed Tomography
16.
Clin Nucl Med ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39192505

ABSTRACT

PURPOSE: Non-small cell lung cancer is the most common subtype of lung cancer. Patient survival prediction using machine learning (ML) and radiomics analysis has shown promising outcomes. However, most studies reported in the literature focused on information extracted from malignant lesions. This study aims to explore the relevance and additional value of information extracted from healthy organs, in addition to tumoral tissue, using ML algorithms. PATIENTS AND METHODS: This study included PET/CT images of 154 patients collected from available online databases. The gross tumor volume and 33 volumes of interest defined on healthy organs were segmented using nnU-Net deep learning-based segmentation. Subsequently, 107 radiomic features were extracted from the PET and CT images (Organomics). Clinical information was combined with PET and CT radiomics from organs and gross tumor volumes, considering 19 different combinations of inputs. Finally, different feature selection (FS; 5 methods) and ML (6 algorithms) algorithms were tested in a 3-fold data split cross-validation scheme. The performance of the models was quantified in terms of the concordance index (C-index) metric. RESULTS: For an input combination of all radiomics information, most of the selected features belonged to the PET Organomics and CT Organomics. The highest C-index (0.68) was achieved using the univariate C-index FS method and random survival forest ML model with CT Organomics + PET Organomics as input, as well as the minimum depth FS method and CoxPH ML model with PET Organomics as input. Of all 17 combinations with a C-index higher than 0.65, 16 used Organomics from PET or CT images as input. CONCLUSIONS: The selected features and C-indices demonstrated that the additional information extracted from healthy organs in both PET and CT imaging modalities improved the ML performance. 
Organomics could be a step toward exploiting the whole information available from multimodality medical images, contributing to the emerging field of digital twins in health care.
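The FS × ML grid evaluated in 3-fold cross-validation can be sketched with scikit-learn; here a toy binary-classification stand-in replaces the survival endpoint (the study optimizes the C-index, and the selector/model names below are illustrative, not the study's five FS methods and six ML algorithms):

```python
from sklearn.datasets import make_classification  # stand-in for the feature table
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the Organomics table: 154 "patients", 107 "features"
X, y = make_classification(n_samples=154, n_features=107, n_informative=10,
                           random_state=0)

selectors = {"anova": f_classif, "mutual_info": mutual_info_classif}
models = {"rf": RandomForestClassifier(random_state=0),
          "logreg": LogisticRegression(max_iter=1000)}

results = {}
for s_name, score_fn in selectors.items():
    for m_name, model in models.items():
        # FS inside the pipeline so selection is refit per fold (no leakage)
        pipe = make_pipeline(SelectKBest(score_fn, k=20), model)
        scores = cross_val_score(pipe, X, y, cv=3)  # 3-fold split as in the study
        results[(s_name, m_name)] = scores.mean()
print(max(results, key=results.get))
```

Putting the selector inside the pipeline matters: fitting FS on the full dataset before cross-validation would leak test information into the selected features.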

17.
J Med Radiat Sci ; 71(2): 251-260, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38454637

ABSTRACT

INTRODUCTION: Concerns regarding the adverse consequences of radiation have increased due to the expanded application of computed tomography (CT) in medical practice. Several studies have indicated that the radiation dose depends on the anatomical region, the imaging technique employed, and patient-specific variables. The aim of this study is to present fitting models for the estimation of age-specific dose estimates (ASDE), analogous to size-specific dose estimates, and effective doses based on patient age, gender, and the type of CT examination in paediatric head, chest, and abdomen-pelvis imaging. METHODS: A total of 583 paediatric patients were included in the study. Radiometric data were gathered from DICOM files. The patients were categorised into five distinct groups (all under 15 years of age), and the effective dose, organ dose, and ASDE were computed for CT examinations of the head, chest, and abdomen-pelvis. Finally, the best fitting models were presented for the estimation of the ASDE and effective dose based on patient age, gender, and the type of examination. RESULTS: The ASDE in head, chest, and abdomen-pelvis CT examinations increased with increasing age. As age increased, the effective dose in head and abdomen-pelvis CT scans decreased. For chest scans, however, the effective dose initially showed a decreasing trend until the first year of life; after that, it increased in correlation with age. CONCLUSIONS: Based on the presented fitting model for the ASDE, these CT scan quantities depend on factors such as patient age and the type of CT examination. For the effective dose, gender was also included in the fitting model. By utilising information about the scan type, region, and age, it becomes feasible to estimate the ASDE and effective dose using the models provided in this study.
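A fitting model of the kind described can be built by regressing dose against age; a sketch with a hypothetical power-law form (the abstract does not specify the functional form, and the numbers below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(age, a, b, c):
    """Hypothetical fitting form: dose = a * age**b + c.
    The study's actual model form is not given in the abstract."""
    return a * np.power(age, b) + c

# Synthetic illustration: ASDE rising with age, as reported for all regions
age = np.array([0.5, 1.0, 2.0, 5.0, 8.0, 10.0, 12.0, 14.0])
asde = 2.0 * age ** 0.4 + 1.0 + np.random.default_rng(3).normal(0, 0.05, age.size)

params, _ = curve_fit(power_law, age, asde, p0=[1.0, 0.5, 1.0])
print(params)  # fitted (a, b, c)
```

With the fitted parameters in hand, an ASDE estimate for any age within the studied range is a single function call.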


Subjects
Head , Radiation Doses , X-Ray Computed Tomography , Humans , Child , Female , Male , Adolescent , Preschool Child , Infant , Head/diagnostic imaging , Pelvis/diagnostic imaging , Abdomen/diagnostic imaging , Thorax/diagnostic imaging , Age Factors , Newborn Infant , Thoracic Radiography , Abdominal Radiography/methods
18.
EJNMMI Phys ; 11(1): 66, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028439

ABSTRACT

BACKGROUND: Low-dose ungated CT is commonly used for total-body PET attenuation and scatter correction (ASC). However, CT-based ASC (CT-ASC) is limited by radiation dose risks of CT examinations, propagation of CT-based artifacts and potential mismatches between PET and CT. We demonstrate the feasibility of direct ASC for multi-tracer total-body PET in the image domain. METHODS: Clinical uEXPLORER total-body PET/CT datasets of [18F]FDG (N = 52), [18F]FAPI (N = 46) and [68Ga]FAPI (N = 60) were retrospectively enrolled in this study. We developed an improved 3D conditional generative adversarial network (cGAN) to directly estimate attenuation and scatter-corrected PET images from non-attenuation and scatter-corrected (NASC) PET images. The feasibility of the proposed 3D cGAN-based ASC was validated using four training strategies: (1) Paired 3D NASC and CT-ASC PET images from three tracers were pooled into one centralized server (CZ-ASC). (2) Paired 3D NASC and CT-ASC PET images from each tracer were individually used (DL-ASC). (3) Paired NASC and CT-ASC PET images from one tracer ([18F]FDG) were used to train the networks, while the other two tracers were used for testing without fine-tuning (NFT-ASC). (4) The pre-trained networks of (3) were fine-tuned with two other tracers individually (FT-ASC). We trained all networks in fivefold cross-validation. The performance of all ASC methods was evaluated by qualitative and quantitative metrics using CT-ASC as the reference. RESULTS: CZ-ASC, DL-ASC and FT-ASC showed comparable visual quality with CT-ASC for all tracers. CZ-ASC and DL-ASC resulted in a normalized mean absolute error (NMAE) of 8.51 ± 7.32% versus 7.36 ± 6.77% (p < 0.05), outperforming NASC (p < 0.0001) in [18F]FDG dataset. CZ-ASC, FT-ASC and DL-ASC led to NMAE of 6.44 ± 7.02%, 6.55 ± 5.89%, and 7.25 ± 6.33% in [18F]FAPI dataset, and NMAE of 5.53 ± 3.99%, 5.60 ± 4.02%, and 5.68 ± 4.12% in [68Ga]FAPI dataset, respectively. 
CZ-ASC, FT-ASC and DL-ASC were superior to NASC (p < 0.0001) and NFT-ASC (p < 0.0001) in terms of NMAE. CONCLUSIONS: CZ-ASC, DL-ASC and FT-ASC demonstrated the feasibility of providing accurate and robust ASC for multi-tracer total-body PET, thereby reducing the radiation hazards to patients from redundant CT examinations. CZ-ASC and FT-ASC could outperform DL-ASC for cross-tracer total-body PET ASC.
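The NMAE metric used throughout these comparisons can be sketched as follows (one common normalization, by the reference intensity range, is assumed; the paper's exact definition may differ):

```python
import numpy as np

def nmae(pred, ref):
    """Normalized mean absolute error (%), normalized by the reference
    intensity range."""
    return 100.0 * np.mean(np.abs(pred - ref)) / (ref.max() - ref.min())

rng = np.random.default_rng(4)
ct_asc = rng.random((64, 64)) * 10.0                # toy CT-ASC reference slice
dl_asc = ct_asc + rng.normal(0, 0.3, ct_asc.shape)  # toy network output
print(f"NMAE = {nmae(dl_asc, ct_asc):.2f}%")
```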

19.
Med Phys ; 51(1): 319-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37475591

ABSTRACT

BACKGROUND: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the available multi-modal information are still lacking. PURPOSE: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and the CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and the image-level and network-level fusions), that is, output-level information fusion (voting-based fusions). The different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, with stratification concerning the centers (20% in each center), was used for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS: Among the single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. 
Conventional fusion algorithms obtained Dice scores in the range 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting over several algorithms yields statistically significant improvements in the segmentation of HNC.
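Output-level majority voting, the best-performing fusion here, simply takes a per-voxel vote over candidate segmentations; a minimal sketch with the Dice score used for evaluation (toy 2D masks, not the study's 3D pipeline):

```python
import numpy as np

def majority_vote(masks):
    """Output-level fusion: a voxel is foreground if more than half of the
    candidate segmentations mark it."""
    stacked = np.stack(masks)
    return (stacked.sum(axis=0) > len(masks) / 2.0).astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gt = np.zeros((32, 32), np.uint8)
gt[8:24, 8:24] = 1                 # toy ground-truth GTV
m1 = np.roll(gt, 1, axis=0)        # three imperfect candidate segmentations
m2 = np.roll(gt, -1, axis=1)
m3 = gt.copy()
fused = majority_vote([m1, m2, m3])
print(dice(m1, gt), dice(fused, gt))
```

In this toy case the vote cancels the individual shift errors, so the fused mask scores a higher Dice than any single shifted candidate, mirroring the advantage reported for the Majority_* models.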


Subjects
Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Algorithms , Head and Neck Neoplasms/diagnostic imaging , Computer-Assisted Image Processing/methods
20.
Med Phys ; 51(7): 4736-4747, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38335175

ABSTRACT

BACKGROUND: Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology has remained limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying the inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification with respect to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using centralized and FL approaches. For FL, we employed DPFL approaches. A membership inference attack was also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test sets. In addition, the models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized model and the DPFL model, respectively. 
The DeLong test did not show statistically significant differences between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. CONCLUSION: The performance of the proposed model was comparable to that of centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of the shared data during the training process.
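An inference-attack AUC near 0.5, as reported, means the attacker cannot distinguish training members from non-members; a sketch of how that evaluation is scored (the attack scores here are hypothetical random values — a real attack would derive them from model outputs such as per-sample losses):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# Ground truth: which samples were members of the training set (1) or not (0)
membership = rng.integers(0, 2, 1000)
# A privacy-preserving model leaks nothing, so attack scores are uninformative
attack_scores = rng.random(1000)
auc = roc_auc_score(membership, attack_scores)
print(auc)  # close to 0.5: the attack is no better than chance
```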


Subjects
COVID-19 , Deep Learning , X-Ray Computed Tomography , COVID-19/diagnostic imaging , Humans , Prognosis , Male , Female , Aged , Middle Aged , Privacy , Thoracic Radiography , Datasets as Topic