Results 1 - 20 of 20
1.
Eur Radiol Exp; 8(1): 53, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38689178

ABSTRACT

BACKGROUND: To compare denoising diffusion probabilistic models (DDPM) and generative adversarial networks (GAN) for recovering contrast-enhanced breast magnetic resonance imaging (MRI) subtraction images from virtual low-dose subtraction images. METHODS: Retrospective, ethically approved study. DDPM- and GAN-reconstructed single-slice subtraction images of 50 breasts with enhancing lesions were compared to original ones at three dose levels (25%, 10%, 5%) using quantitative measures and radiologic evaluations. Two radiologists stated their preference based on the reconstruction quality and scored the lesion conspicuity as compared to the original, blinded to the model. Fifty lesion-free maximum intensity projections were evaluated for the presence of false positives. Results were compared between models and dose levels using generalized linear mixed models. RESULTS: At 5% dose, both radiologists preferred the GAN-generated images, whereas at 25% dose, both radiologists preferred the DDPM-generated images. Median lesion conspicuity scores did not differ between GAN and DDPM at 25% dose (5 versus 5, p = 1.000) and 10% dose (4 versus 4, p = 1.000). At 5% dose, both readers assigned higher conspicuity to the GAN than to the DDPM (3 versus 2, p = 0.007). In the lesion-free examinations, DDPM and GAN showed no differences in the false-positive rate at 5% (15% versus 22%), 10% (10% versus 6%), and 25% (6% versus 4%) (p = 1.000). CONCLUSIONS: Both GAN and DDPM yielded promising results in low-dose image reconstruction. However, neither of them showed superior results over the other model for all dose levels and evaluation metrics. Further development is needed to counteract false positives. RELEVANCE STATEMENT: For MRI-based breast cancer screening, reducing the contrast agent dose is desirable. Diffusion probabilistic models and generative adversarial networks were capable of retrospectively enhancing the signal of low-dose images. Hence, they may supplement imaging with reduced doses in the future. KEY POINTS: • Deep learning may help recover signal in low-dose contrast-enhanced breast MRI. • Two models (DDPM and GAN) were trained at different dose levels. • Radiologists preferred DDPM images at 25% and GAN images at 5% dose. • Lesion conspicuity between DDPM and GAN was similar, except at 5% dose. • GAN and DDPM yield promising results in low-dose image reconstruction.
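
The abstract gives no implementation details, but models of this kind typically optimize the standard DDPM noise-prediction objective (Ho et al., 2020). A minimal PyTorch sketch, assuming a hypothetical `model(x_t, t)` U-Net denoiser and 4D image batches; for the reconstruction task above, the denoiser would additionally be conditioned on the low-dose input, which is omitted here:

```python
# Minimal sketch of the DDPM noise-prediction training objective.
# `model` is a hypothetical U-Net taking the noised image and timestep.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    """Sample a timestep, noise the image, and regress the added noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)
```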


Subjects
Breast Neoplasms, Contrast Media, Magnetic Resonance Imaging, Humans, Female, Retrospective Studies, Contrast Media/administration & dosage, Breast Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Middle Aged, Models, Statistical, Adult, Aged
2.
Comput Biol Med; 175: 108410, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38678938

ABSTRACT

Latent diffusion models (LDMs) have emerged as a state-of-the-art image generation method, outperforming previous generative adversarial networks (GANs) in terms of training stability and image quality. In computational pathology, generative models are valuable for data sharing and data augmentation. However, the impact of LDM-generated images on histopathology tasks compared to traditional GANs has not been systematically studied. We trained three LDMs and a StyleGAN2 model on histology tiles from nine colorectal cancer (CRC) tissue classes. The LDMs comprised 1) a fine-tuned version of Stable Diffusion v1.4, 2) an LDM with a Kullback-Leibler (KL) autoencoder (KLF8-DM), and 3) an LDM with a vector-quantized (VQ) autoencoder (VQF8-DM). We assessed image quality through expert ratings, dimensional reduction methods, distribution similarity measures, and their impact on training a multiclass tissue classifier. Additionally, we investigated image memorization in the KLF8-DM and StyleGAN2 models. All models provided high image quality, with the KLF8-DM achieving the best Fréchet Inception Distance (FID) and expert rating scores for complex tissue classes. For simpler classes, the VQF8-DM and StyleGAN2 models performed better. Image memorization was negligible for both the StyleGAN2 and KLF8-DM models. Classifiers trained on a mix of KLF8-DM-generated and real images achieved a 4% improvement in overall classification accuracy, highlighting the usefulness of these images for dataset augmentation. Our systematic study of generative methods showed that the KLF8-DM produces the highest-quality images with negligible image memorization. The higher classifier performance on the generatively augmented dataset suggests that this augmentation technique can be employed to enhance histopathology classifiers for various tasks.
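
The FID used above to rank the models has a standard closed form between Gaussians fitted to Inception features; a small NumPy/SciPy sketch (feature extraction with an Inception network is omitted, and this is not the authors' code):

```python
# FID = |mu_r - mu_f|^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})
# between Gaussians fitted to real and generated feature vectors.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """feats_*: (n_samples, n_features) arrays of Inception features."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c_r @ c_f).real             # discard tiny imaginary parts
    return float(((mu_r - mu_f) ** 2).sum() + np.trace(c_r + c_f - 2 * covmean))
```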


Subjects
Colorectal Neoplasms, Humans, Colorectal Neoplasms/pathology, Colorectal Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Algorithms
3.
Diagnostics (Basel); 14(5), 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38472955

ABSTRACT

Increased attention has been given to MRI in radiation-free screening for malignant nodules in recent years. Our objective was to compare the performance of human readers and radiomic feature analysis based on stand-alone and complementary CT and MRI imaging in classifying pulmonary nodules. This single-center study comprises patients with CT findings of pulmonary nodules who underwent additional lung MRI and whose nodules were classified as benign/malignant by resection. For radiomic feature analysis, 2D segmentation was performed for each lung nodule on axial CT, T2-weighted (T2w), and diffusion (DWI) images. The 105 extracted features were reduced by iterative backward selection. The performance of radiomics and human readers was compared by calculating accuracy with Clopper-Pearson confidence intervals. Fifty patients (mean age 63 ± 10 years) with 66 pulmonary nodules (40 malignant) were evaluated. Accuracy values for radiomic feature analysis vs. radiologists based on CT alone (0.68; 95%CI: 0.56, 0.79 vs. 0.59; 95%CI: 0.46, 0.71), T2w alone (0.65; 95%CI: 0.52, 0.77 vs. 0.68; 95%CI: 0.54, 0.78), DWI alone (0.61; 95%CI: 0.48, 0.72 vs. 0.73; 95%CI: 0.60, 0.83), combined T2w/DWI (0.73; 95%CI: 0.60, 0.83 vs. 0.70; 95%CI: 0.57, 0.80), and combined CT/T2w/DWI (0.83; 95%CI: 0.72, 0.91 vs. 0.64; 95%CI: 0.51, 0.75) were calculated. This study is the first to show that by combining quantitative image information from CT, T2w, and DWI datasets, pulmonary nodule assessment through radiomics analysis is superior to using one modality alone, even exceeding human readers' performance.
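
The Clopper-Pearson intervals quoted above follow the standard exact construction from the beta distribution; a small sketch reproducing, for example, the CT-only radiomics value:

```python
# Clopper-Pearson (exact) confidence interval for a binomial proportion,
# as used above to bound classification accuracy.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (1 - alpha) CI for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Example: 45 of 66 nodules correct -> accuracy 0.68, CI roughly (0.56, 0.79)
print(clopper_pearson(45, 66))
```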

4.
Eur Radiol; 34(2): 1176-1178, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37580599
5.
J Pathol; 262(3): 310-319, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38098169

ABSTRACT

Deep learning applied to whole-slide histopathology images (WSIs) has the potential to enhance precision oncology and alleviate the workload of experts. However, developing these models necessitates large amounts of data with ground truth labels, which can be both time-consuming and expensive to obtain. Pathology reports are typically unstructured or poorly structured texts, and efforts to implement structured reporting templates have been unsuccessful, as they lead to perceived extra workload. In this study, we hypothesised that large language models (LLMs), such as the generative pre-trained transformer 4 (GPT-4), can extract structured data from unstructured plain language reports using a zero-shot approach without requiring any re-training. We tested this hypothesis by utilising GPT-4 to extract information from histopathological reports, focusing on two extensive sets of pathology reports for colorectal cancer and glioblastoma. We found a high concordance between LLM-generated structured data and human-generated structured data. Consequently, LLMs could potentially be employed routinely to extract ground truth data for machine learning from unstructured pathology reports in the future. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
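
As a rough illustration of the zero-shot setup described above, the target schema can be placed in the prompt and the model's JSON answer parsed directly; `call_llm` and the field names are hypothetical stand-ins, not the authors' template:

```python
# Hedged sketch of zero-shot structured extraction from a pathology report.
# `call_llm` stands in for any chat-completion API; the schema is illustrative.
import json

SYSTEM = (
    "Extract the following fields from the pathology report and answer "
    "with JSON only: tumor_type, grade, pT_stage, pN_stage, margins."
)

def extract_structured(report_text: str, call_llm) -> dict:
    """Zero-shot: no fine-tuning, no examples; the schema lives in the prompt."""
    answer = call_llm(system=SYSTEM, user=report_text)
    return json.loads(answer)
```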


Subjects
Glioblastoma, Precision Medicine, Humans, Machine Learning, United Kingdom
6.
Med Image Anal; 92: 103059, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38104402

ABSTRACT

Artificial intelligence (AI) has a multitude of applications in cancer research and oncology. However, the training of AI systems is impeded by the limited availability of large datasets due to data protection requirements and other regulatory obstacles. Federated and swarm learning represent possible solutions to this problem by collaboratively training AI models while avoiding data transfer. However, in these decentralized methods, weight updates are still transferred to the aggregation server for merging the models. This leaves the possibility for a breach of data privacy, for example by model inversion or membership inference attacks by untrusted servers. Somewhat-homomorphically-encrypted federated learning (SHEFL) is a solution to this problem because only encrypted weights are transferred, and model updates are performed in the encrypted space. Here, we demonstrate the first successful implementation of SHEFL in a range of clinically relevant tasks in cancer image analysis on multicentric datasets in radiology and histopathology. We show that SHEFL enables the training of AI models which outperform locally trained models and perform on par with models which are centrally trained. In the future, SHEFL can enable multiple institutions to co-train AI models without forsaking data governance and without ever transmitting any decryptable data to untrusted servers.
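
For orientation, the aggregation step that SHEFL protects looks like ordinary federated averaging; in SHEFL the same weighted mean is computed over homomorphically encrypted updates, whereas this NumPy sketch shows it in the clear:

```python
# FedAvg-style aggregation sketch (plaintext stand-in). In SHEFL the server
# performs this weighted average on ciphertexts and never sees the weights.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted mean of per-client flattened parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                 # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)
```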


Subjects
Neoplasms, Radiology, Humans, Artificial Intelligence, Learning, Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted
7.
Radiology; 309(1): e230806, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37787671

ABSTRACT

Background Clinicians consider both imaging and nonimaging data when diagnosing diseases; however, current machine learning approaches primarily consider data from a single modality. Purpose To develop a neural network architecture capable of integrating multimodal patient data and compare its performance to models incorporating a single modality for diagnosing up to 25 pathologic conditions. Materials and Methods In this retrospective study, imaging and nonimaging patient data were extracted from the Medical Information Mart for Intensive Care (MIMIC) database and an internal database comprising chest radiographs and clinical parameters of patients in the intensive care unit (ICU) (January 2008 to December 2020). The MIMIC and internal data sets were each split into training (n = 33 893, n = 28 809), validation (n = 740, n = 7203), and test (n = 1909, n = 9004) sets. A novel transformer-based neural network architecture was trained to diagnose up to 25 conditions using nonimaging data alone, imaging data alone, or multimodal data. Diagnostic performance was assessed using area under the receiver operating characteristic curve (AUC) analysis. Results The MIMIC and internal data sets included 36 542 patients (mean age, 63 years ± 17 [SD]; 20 567 male patients) and 45 016 patients (mean age, 66 years ± 16; 27 577 male patients), respectively. The multimodal model showed improved diagnostic performance for all pathologic conditions. For the MIMIC data set, the mean AUC was 0.77 (95% CI: 0.77, 0.78) when both chest radiographs and clinical parameters were used, compared with 0.70 (95% CI: 0.69, 0.71; P < .001) for only chest radiographs and 0.72 (95% CI: 0.72, 0.73; P < .001) for only clinical parameters. These findings were confirmed on the internal data set. Conclusion A model trained on imaging and nonimaging data outperformed models trained on only one type of data for diagnosing multiple diseases in patients in an ICU setting. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Kitamura and Topol in this issue.
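
The abstract leaves the fusion mechanism unspecified; one common transformer-based pattern is to project each clinical value to a token and attend over image and clinical tokens jointly. A hedged PyTorch sketch, where all layer sizes and the ViT-style image tokenizer are assumptions:

```python
# Sketch of multimodal token fusion for 25-label diagnosis; not the
# authors' architecture, whose details the abstract does not give.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dim=256, n_labels=25, n_clinical=32):
        super().__init__()
        self.clin_proj = nn.Linear(1, dim)           # one token per clinical value
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, n_labels)         # one logit per condition

    def forward(self, img_tokens, clinical):         # (B,N,dim), (B,n_clinical)
        clin_tokens = self.clin_proj(clinical.unsqueeze(-1))
        cls = self.cls.expand(img_tokens.size(0), -1, -1)
        x = torch.cat([cls, img_tokens, clin_tokens], dim=1)
        return self.head(self.encoder(x)[:, 0])      # CLS token -> logits
```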


Subjects
Deep Learning, Humans, Male, Middle Aged, Aged, Retrospective Studies, Radiography, Databases, Factual, Inpatients
8.
Sci Rep; 13(1): 14207, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37648728

ABSTRACT

Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (= worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolutional models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
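
The Dice score used above has a fixed definition; for reference, a NumPy version for binary masks:

```python
# Dice score: 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)
```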


Subjects
Breast Density, Magnetic Resonance Imaging, Humans, Retrospective Studies, Radiography, Electric Power Supplies
9.
Sci Rep; 13(1): 10666, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37393383

ABSTRACT

When clinicians assess the prognosis of patients in intensive care, they take imaging and non-imaging data into account. In contrast, many traditional machine learning models rely on only one of these modalities, limiting their potential in medical applications. This work proposes and evaluates a transformer-based neural network as a novel AI architecture that integrates multimodal patient data, i.e., imaging data (chest radiographs) and non-imaging data (clinical data). We evaluate the performance of our model in a retrospective study with 6,125 patients in intensive care. We show that the combined model (area under the receiver operating characteristic curve [AUROC] of 0.863) is superior to the radiographs-only model (AUROC = 0.811, p < 0.001) and the clinical data-only model (AUROC = 0.785, p < 0.001) when tasked with predicting in-hospital survival per patient. Furthermore, we demonstrate that our proposed model is robust in cases where not all (clinical) data points are available.


Subjects
Critical Care, Diagnostic Imaging, Humans, Retrospective Studies, Area Under Curve, Electric Power Supplies
10.
Sci Rep; 13(1): 12098, 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37495660

ABSTRACT

Although generative adversarial networks (GANs) can produce large datasets, their limited diversity and fidelity have been recently addressed by denoising diffusion probabilistic models, which have demonstrated superiority in natural image synthesis. In this study, we introduce Medfusion, a conditional latent DDPM designed for medical image generation, and evaluate its performance against GANs, which currently represent the state of the art. Medfusion was trained and compared with StyleGAN-3 using fundoscopy images from the AIROGS dataset, radiographs from the CheXpert dataset, and histopathology images from the CRCDX dataset. Based on previous studies, Progressively Growing GAN (ProGAN) and Conditional GAN (cGAN) were used as additional baselines on the CheXpert and CRCDX datasets, respectively. Medfusion exceeded GANs in terms of diversity (recall), achieving better scores of 0.40 compared to 0.19 in the AIROGS dataset, 0.41 compared to 0.02 (cGAN) and 0.24 (StyleGAN-3) in the CRCDX dataset, and 0.32 compared to 0.17 (ProGAN) and 0.08 (StyleGAN-3) in the CheXpert dataset. Furthermore, Medfusion exhibited equal or higher fidelity (precision) across all three datasets. Our study shows that Medfusion constitutes a promising alternative to GAN-based models for generating high-quality medical images, leading to improved diversity and fewer artifacts in the generated images.
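
The "diversity (recall)" scores above refer to the improved precision/recall metrics for generative models (Kynkäänniemi et al., 2019); a compact sketch of the recall side, assuming precomputed feature vectors:

```python
# Improved recall: fraction of real samples lying inside the generated
# manifold, approximated as the union of k-NN balls around fake features.
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(feats, k=3):
    d = cdist(feats, feats)
    return np.sort(d, axis=1)[:, k]          # column 0 is the self-distance

def recall(real_feats, fake_feats, k=3):
    radii = knn_radii(fake_feats, k)
    d = cdist(real_feats, fake_feats)        # (n_real, n_fake)
    return float((d <= radii[None, :]).any(axis=1).mean())
```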


Subjects
Artifacts, Mental Recall, Diffusion, Models, Statistical, Ophthalmoscopy, Image Processing, Computer-Assisted
11.
Sci Rep; 13(1): 7303, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37147413

ABSTRACT

Recent advances in computer vision have shown promising results in image generation. Diffusion probabilistic models have generated realistic images from textual input, as demonstrated by DALL-E 2, Imagen, and Stable Diffusion. However, their use in medicine, where imaging data typically comprises three-dimensional volumes, has not been systematically evaluated. Synthetic images may play a crucial role in privacy-preserving artificial intelligence and can also be used to augment small datasets. We show that diffusion probabilistic models can synthesize high-quality medical data for magnetic resonance imaging (MRI) and computed tomography (CT). For quantitative evaluation, two radiologists rated the quality of the synthesized images regarding "realistic image appearance", "anatomical correctness", and "consistency between slices". Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice scores, 0.91 [without synthetic data], 0.95 [with synthetic data]).


Subjects
Artificial Intelligence, Imaging, Three-Dimensional, Magnetic Resonance Imaging, Tomography, X-Ray Computed, Models, Statistical, Image Processing, Computer-Assisted/methods
12.
Comput Methods Programs Biomed; 234: 107505, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37003043

ABSTRACT

BACKGROUND AND OBJECTIVES: Bedside chest radiographs (CXRs) are challenging to interpret but important for monitoring cardiothoracic disease and invasive therapy devices in critical care and emergency medicine. Taking surrounding anatomy into account is likely to improve the diagnostic accuracy of artificial intelligence and bring its performance closer to that of a radiologist. Therefore, we aimed to develop a deep convolutional neural network for efficient automatic anatomy segmentation of bedside CXRs. METHODS: To improve the efficiency of the segmentation process, we introduced a "human-in-the-loop" segmentation workflow with an active learning approach, looking at five major anatomical structures in the chest (heart, lungs, mediastinum, trachea, and clavicles). This allowed us to decrease the time needed for segmentation by 32% and select the most complex cases to utilize human expert annotators efficiently. After annotation of 2,000 CXRs from different Level 1 medical centers at Charité - University Hospital Berlin, there was no relevant improvement in model performance, and the annotation process was stopped. A 5-layer U-ResNet was trained for 150 epochs using a combined soft Dice similarity coefficient (DSC) and cross-entropy as a loss function. DSC, Jaccard index (JI), Hausdorff distance (HD) in mm, and average symmetric surface distance (ASSD) in mm were used to assess model performance. External validation was performed using an independent external test dataset from Aachen University Hospital (n = 20). RESULTS: The final training, validation, and testing dataset consisted of 1900/50/50 segmentation masks for each anatomical structure. Our model achieved a mean DSC/JI/HD/ASSD of 0.93/0.88/32.1/5.8 for the lung, 0.92/0.86/21.65/4.85 for the mediastinum, 0.91/0.84/11.83/1.35 for the clavicles, 0.9/0.85/9.6/2.19 for the trachea, and 0.88/0.8/31.74/8.73 for the heart. Validation using the external dataset showed an overall robust performance of our algorithm. CONCLUSIONS: Using an efficient computer-aided segmentation method with active learning, our anatomy-based model achieves comparable performance to state-of-the-art approaches. Instead of only segmenting the non-overlapping portions of the organs, as previous studies did, a closer approximation to actual anatomy is achieved by segmenting along the natural anatomical borders. This novel anatomy approach could be useful for developing pathology models for accurate and quantifiable diagnosis.
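
The combined soft Dice plus cross-entropy loss named above can be sketched in PyTorch as follows (a generic formulation, not the authors' exact weighting):

```python
# Combined soft Dice + cross-entropy loss for multi-class segmentation.
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-6):
    """logits: (B,C,H,W); target: (B,H,W) long tensor of class indices."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)  # per-class soft Dice
    return ce + (1 - dice.mean())
```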


Subjects
Deep Learning, Humans, Image Processing, Computer-Assisted/methods, Artificial Intelligence, Neural Networks, Computer, Thorax
13.
Radiology; 307(3): e222211, 2023 May.
Article in English | MEDLINE | ID: mdl-36943080

ABSTRACT

Background Reducing the amount of contrast agent needed for contrast-enhanced breast MRI is desirable. Purpose To investigate if generative adversarial networks (GANs) can recover contrast-enhanced breast MRI scans from unenhanced images and virtual low-contrast-enhanced images. Materials and Methods In this retrospective study of breast MRI performed from January 2010 to December 2019, simulated low-contrast images were produced by adding virtual noise to the existing contrast-enhanced images. GANs were then trained to recover the contrast-enhanced images from the simulated low-contrast images (approach A) or from the unenhanced T1- and T2-weighted images (approach B). Two experienced radiologists were tasked with distinguishing between real and synthesized contrast-enhanced images using both approaches. Image appearance and conspicuity of enhancing lesions on the real versus synthesized contrast-enhanced images were independently compared and rated on a five-point Likert scale. P values were calculated by using bootstrapping. Results A total of 9751 breast MRI examinations from 5086 patients (mean age, 56 years ± 10 [SD]) were included. Readers who were blinded to the nature of the images could not distinguish real from synthetic contrast-enhanced images (average accuracy of differentiation: approach A, 52 of 100; approach B, 61 of 100). The test set included images with and without enhancing lesions (29 enhancing masses and 21 nonmass enhancement; 50 total). When readers who were not blinded compared the appearance of the real versus synthetic contrast-enhanced images side by side, approach A image ratings were significantly higher than those of approach B (mean rating, 4.6 ± 0.1 vs 3.0 ± 0.2; P < .001), with the noninferiority margin met by synthetic images from approach A (P < .001) but not B (P > .99). Conclusion Generative adversarial networks may be useful to enable breast MRI with reduced contrast agent dose. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Bahl in this issue.
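
As a rough illustration of the low-dose simulation idea in approach A: the abstract states only that virtual noise was added to the contrast-enhanced images, so the enhancement-scaling term below is an added assumption (mirroring the dose levels used by the same group elsewhere), not the paper's method:

```python
# Hypothetical low-dose simulation: attenuate enhancement, add noise.
import numpy as np

def simulate_low_dose(pre, post, dose_fraction=0.25, noise_sigma=0.01):
    enhancement = post - pre                    # contrast-induced signal
    low = pre + dose_fraction * enhancement     # attenuated enhancement (assumed)
    return low + np.random.normal(0, noise_sigma, low.shape)
```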


Subjects
Contrast Media, Magnetic Resonance Imaging, Humans, Middle Aged, Retrospective Studies, Magnetic Resonance Imaging/methods, Breast, Machine Learning
14.
Radiology; 307(1): e220510, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36472534

ABSTRACT

Background Supine chest radiography for bedridden patients in intensive care units (ICUs) is one of the most frequently ordered imaging studies worldwide. Purpose To evaluate the diagnostic performance of a neural network-based model that is trained on structured semiquantitative radiologic reports of bedside chest radiographs. Materials and Methods For this retrospective single-center study, bedside chest radiographs of children and adults in the ICU of a university hospital, acquired from January 2009 to December 2020, were reported using a structured and itemized template. Ninety-eight radiologists rated the radiographs semiquantitatively for the severity of disease patterns. These data were used to train a neural network to identify cardiomegaly, pulmonary congestion, pleural effusion, pulmonary opacities, and atelectasis. A held-out internal test set (100 radiographs from 100 patients) that was assessed independently by an expert panel of six radiologists provided the ground truth. Individual assessments by each of these six radiologists, by two nonradiologist physicians in the ICU, and by the neural network were compared with the ground truth. Separately, the nonradiologist physicians assessed the images without and with preliminary readings provided by the neural network. The weighted Cohen κ coefficient was used to measure agreement between the readers and the ground truth. Results A total of 193 566 radiographs in 45 016 patients (mean age, 66 years ± 16 [SD]; 61% men) were included and divided into training (n = 122 294; 64%), validation (n = 31 243; 16%), and test (n = 40 029; 20%) sets. The neural network exhibited higher agreement with a majority vote of the expert panel (κ = 0.86) than each individual radiologist compared with the majority vote of the expert panel (κ = 0.81 to ≤0.84). When the neural network provided preliminary readings, the reports of the nonradiologist physicians improved considerably (aided vs unaided, κ = 0.87 vs 0.79, respectively; P < .001). Conclusion A neural network trained with structured semiquantitative bedside chest radiography reports allowed nonradiologist physicians to produce improved interpretations, as measured against the consensus reading of expert radiologists. © RSNA, 2022 Supplemental material is available for this article. See also the editorial by Wielpütz in this issue.
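
The weighted Cohen κ used above is available directly in scikit-learn for ordinal severity grades; quadratic weighting is shown as an assumption, since the abstract does not state the weighting scheme:

```python
# Weighted Cohen kappa between one reader and the expert-panel majority vote.
from sklearn.metrics import cohen_kappa_score

reader = [0, 1, 2, 2, 3, 1]      # severity grades from one reader
panel  = [0, 1, 2, 3, 3, 1]      # majority vote of the expert panel
print(cohen_kappa_score(reader, panel, weights="quadratic"))
```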


Subjects
Artificial Intelligence, Radiography, Thoracic, Male, Adult, Child, Humans, Aged, Female, Retrospective Studies, Radiography, Thoracic/methods, Lung, Radiography
15.
J Hepatobiliary Pancreat Sci; 30(5): 602-614, 2023 May.
Article in English | MEDLINE | ID: mdl-36196525

ABSTRACT

BACKGROUND/PURPOSE: The primary cause of mortality in colorectal cancer is metastatic disease. We investigated the ability of a machine learning (ML) algorithm to stratify overall survival (OS) of patients undergoing curative resection for colorectal liver metastases (CRLM). METHODS: Patients undergoing curative liver resection for CRLM between 2010 and 2021 at the University Hospital RWTH Aachen were eligible for this retrospective study. Patients with recurrent metastases, incomplete resections, or early deaths were excluded. A gradient-boosted decision tree (GBDT) model identified patients at risk of poor OS, based on clinicopathological characteristics. Differences in survival were compared with Kaplan-Meier analysis and the log-rank test. RESULTS: A total of 487 patients were split into training (n = 389, 80%) and test cohorts (n = 98, 20%). Of the latter, 20 (20%) were identified by the GBDT model as high-risk and showed significantly reduced OS (23 months vs 52 months, P = .005) and an increased hazard ratio (2.434, 95%CI 1.280-4.627, P = .007). The strongest predictors were preoperative serum carcinoembryonic antigen (CEA), age, diameter of the largest metastasis, number of metastases, body mass index, and primary tumor grading. CONCLUSION: A GBDT model can identify high-risk patients regarding OS after curative resection of CRLM. Closer follow-up and aggressive systemic treatment strategies may be beneficial to these patients.
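
The Kaplan-Meier and log-rank comparison described above can be sketched with lifelines (an assumed tool choice; the authors' software is not stated):

```python
# Compare survival between GBDT-assigned risk groups.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_risk_groups(time, event, high_risk):
    """time/event: arrays of follow-up and death indicators;
    high_risk: boolean mask from the GBDT model."""
    km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
    km_hi.fit(time[high_risk], event[high_risk], label="high risk")
    km_lo.fit(time[~high_risk], event[~high_risk], label="low risk")
    res = logrank_test(time[high_risk], time[~high_risk],
                       event_observed_A=event[high_risk],
                       event_observed_B=event[~high_risk])
    return km_hi, km_lo, res.p_value
```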


Subjects
Colorectal Neoplasms, Liver Neoplasms, Humans, Retrospective Studies, Colorectal Neoplasms/pathology, Carcinoembryonic Antigen, Liver Neoplasms/secondary, Hepatectomy, Prognosis
16.
Diagnostics (Basel); 12(3), 2022 Mar 11.
Article in English | MEDLINE | ID: mdl-35328240

ABSTRACT

For T2 mapping, the underlying mono-exponential signal decay is traditionally quantified by non-linear least-squares estimation (LSE) curve fitting, which is prone to outliers and computationally expensive. This study aimed to validate a fully connected neural network (NN) to estimate T2 relaxation times and to assess its performance versus LSE fitting methods. To this end, the NN was trained and tested in silico on a synthetic dataset of 75 million signal decays. Its quantification error was comparatively evaluated against three LSE methods, i.e., traditional methods without any modification, with an offset, and one with noise correction. Following in situ acquisition of T2 maps in seven human cadaveric knee joint specimens at high and low signal-to-noise ratios, the NN and LSE methods were used to estimate the T2 relaxation times of the manually segmented patellofemoral cartilage. In silico modeling at low signal-to-noise ratio indicated significantly lower quantification error for the NN (by medians of 6-33%) than for the LSE methods (p < 0.001). These results were confirmed by the in situ measurements (medians of 10-35%). T2 quantification by the NN took only 4 s, which was faster than the LSE methods (28-43 s). In conclusion, NNs provide fast, accurate, and robust quantification of T2 relaxation times.
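
The mono-exponential model being fitted is S(TE) = S0 · exp(-TE/T2); for reference, a plain LSE fit with SciPy on synthetic data (the offset variant mentioned above would add a constant term):

```python
# Reference non-linear least-squares fit of the mono-exponential T2 decay.
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    return s0 * np.exp(-te / t2)

te = np.array([10., 20., 30., 40., 50., 60.])              # echo times in ms
signal = t2_decay(te, s0=1000.0, t2=45.0) + np.random.normal(0, 5, te.size)

params, _ = curve_fit(t2_decay, te, signal, p0=(signal[0], 30.0))
print(f"estimated T2 = {params[1]:.1f} ms")
```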

17.
Diagnostics (Basel); 12(2), 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35204338

ABSTRACT

Machine learning results based on radiomic analysis are often not transferable. A potential reason for this is the variability of radiomic features due to varying human-made segmentations. Therefore, the aim of this study was to provide a comprehensive inter-reader reliability analysis of radiomic features in five clinical image datasets and to assess the association of inter-reader reliability and survival prediction. In this study, we analyzed 4598 tumor segmentations in both computed tomography and magnetic resonance imaging data. We used a neural network to generate 100 additional segmentation outlines for each tumor and performed a reliability analysis of radiomic features. To prove clinical utility, we predicted patient survival based on all features and on the most reliable features. Survival prediction models for both computed tomography and magnetic resonance imaging datasets demonstrated less statistical spread and superior survival prediction when based on the most reliable features. Mean concordance indices were Cmean = 0.58 [most reliable] vs. Cmean = 0.56 [all] (p < 0.001, CT) and Cmean = 0.58 vs. Cmean = 0.57 (p = 0.23, MRI). Thus, preceding reliability analyses and selection of the most reliable radiomic features improves the underlying model's ability to predict patient survival across clinical imaging modalities and tumor entities.
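
The Cmean values above are Harrell's concordance index; a small example using the lifelines implementation (an assumed tool choice), noting that higher predicted risk must map to shorter survival:

```python
# Harrell's C-index: concordance between predicted risk and survival.
from lifelines.utils import concordance_index
import numpy as np

event_times = np.array([5., 12., 9., 20., 14.])
risk_scores = np.array([0.9, 0.3, 0.6, 0.1, 0.2])   # higher = worse prognosis
event_observed = np.array([1, 1, 0, 1, 1])          # 0 = censored

print(concordance_index(event_times, -risk_scores, event_observed))
```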

19.
Diagnostics (Basel); 11(9), 2021 Sep 09.
Article in English | MEDLINE | ID: mdl-34573991

ABSTRACT

Liver cirrhosis poses a major risk for the development of hepatocellular carcinoma (HCC). This retrospective study investigated to what extent radiomic features in contrast-enhanced computed tomography (CECT) allow the prediction of emerging HCC in patients with cirrhosis. A total of 51 patients with liver cirrhosis and newly detected HCC lesions (n = 82) during follow-up (FU-CT) after local tumor therapy were included. These lesions had not been detected by the radiologist in the chronologically prior CECT (PRE-CT). For training purposes, segmentations of 22 patients with liver cirrhosis but without HCC recurrence were added. A total of 186 areas (82 HCCs and 104 cirrhotic liver areas without HCC) were analyzed. Using univariate analysis, four independent features were identified, and a multivariate logistic regression model was trained to classify the outlined regions as "HCC probable" or "HCC improbable". In total, 60/82 (73%) of segmentations with later detected HCC and 84/104 (81%) of segmentations without HCC were classified correctly (AUC of 81%, 95% CI 74-87%), yielding a sensitivity of 72% (95% CI 57-83%) and a specificity of 86% (95% CI 76-96%). In conclusion, the model predicted the occurrence of new HCCs within segmented areas with acceptable sensitivity and specificity in cirrhotic liver tissue in CECT.
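
A minimal sketch of the multivariate logistic regression step with scikit-learn; the feature matrix below is random placeholder data, not the study's radiomic features:

```python
# Logistic regression on 4 selected radiomic features per outlined region.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((186, 4))             # placeholder: 4 features x 186 regions
y = rng.integers(0, 2, 186)          # placeholder: 1 = HCC later detected

clf = LogisticRegression().fit(X, y)
prob = clf.predict_proba(X)[:, 1]
tn, fp, fn, tp = confusion_matrix(y, prob >= 0.5).ravel()
print("AUC:", roc_auc_score(y, prob),
      "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```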

20.
Sci Rep; 10(1): 12688, 2020 Jul 29.
Article in English | MEDLINE | ID: mdl-32728098

ABSTRACT

Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.
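
The abstract does not name the robustness statistic; as one simple proxy, the per-feature coefficient of variation across the sampled segmentation outlines can be computed as below:

```python
# Per-feature variability across segmentation samples for one lesion
# (one possible robustness proxy, not necessarily the authors' statistic).
import numpy as np

def feature_robustness(feature_matrix):
    """feature_matrix: (n_segmentation_samples, n_features) for one lesion."""
    mean = feature_matrix.mean(axis=0)
    std = feature_matrix.std(axis=0)
    return std / np.maximum(np.abs(mean), 1e-12)   # per-feature CV; low = robust
```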


Subjects
Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Automation, Databases, Factual, Humans, Neural Networks, Computer, Observer Variation