Results 1 - 9 of 9
1.
J Magn Reson Imaging ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826142

ABSTRACT

BACKGROUND: The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need for a robust, objective system for automatically detecting FLLs. PURPOSE: To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) images in patients with FLLs. STUDY TYPE: Retrospective. SUBJECTS: 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T scanners, including T1-, T2-, and diffusion-weighted imaging, in/out-of-phase imaging, and dynamic contrast-enhanced imaging. ASSESSMENT: The diagnostic performance of AI, radiologists, and their combination was compared. Using 20 mm as the cut-off value, the lesions were divided into two groups and further into four subgroups: <10, 10-20, 20-40, and ≥40 mm, to evaluate the sensitivity of radiologists and AI in detecting lesions of different sizes. We compared the pathologic sizes of 122 surgically resected lesions with the measurements obtained using AI and those made by radiologists. STATISTICAL TESTS: McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS: The average Dice coefficient of AI in segmentation of liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting all lesions <20 mm (0.848 vs. 0.788). Both AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). A remarkable agreement existed in the average tumor sizes among the three measurements (P = 0.174).
DATA CONCLUSION: AI software based on deep learning exhibited practical value in automatically identifying and measuring liver lesions. TECHNICAL EFFICACY: Stage 2.
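The Dice coefficient used above to score segmentation overlap is straightforward to compute from binary masks. The following is an illustrative numpy sketch, not the study's software:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A Dice of 0.62, as reported for the AI's lesion segmentations, means the overlap region is 62% of the average of the two mask volumes.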

2.
Eur Radiol ; 31(11): 8775-8785, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33934177

ABSTRACT

OBJECTIVES: To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD), and normal CTs. METHODS: Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonia, and 559/17/34 normal (no pathologies) CTs. A metrics-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and the probability distribution of airspace opacities. RESULTS: The most discriminative features of COVID-19 are the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distributions across the COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. CONCLUSIONS: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance, and may therefore be useful to facilitate the diagnosis of COVID-19.
KEY POINTS: • Unsupervised clustering reveals the key tomographic features including percent airspace opacity and peripheral and basal opacities most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are respectively 90%, 64%, 91%, and 94%. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.
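The sensitivity and specificity figures reported above follow directly from a confusion matrix over binary predictions. A minimal illustrative helper (the function name is my own, not from the paper):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    for binary labels (1 = positive class, e.g. COVID-19)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```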


Subject(s)
COVID-19, Humans, Machine Learning, Retrospective Studies, SARS-CoV-2, Thorax
3.
IEEE Trans Med Imaging ; 43(5): 1995-2009, 2024 May.
Article in English | MEDLINE | ID: mdl-38224508

ABSTRACT

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often small in sample size and only partially labeled, i.e., only a subset of organs is annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses of prior techniques. We revisit the problem from the perspective of partial-label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using the two ground truth-based signals and then iteratively incorporate the pseudo-label signal into the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to state-of-the-art partial-label segmentation methods, COSST demonstrates consistently superior performance on various segmentation tasks and with different training data sizes.
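The pseudo-label reliability step can be illustrated with a much-simplified stand-in for the paper's latent-space outlier detection: per predicted class, keep only the samples whose latent features lie closest to their class centroid and discard the most outlying ones. The function name, the centroid-distance criterion, and the keep fraction are all assumptions for illustration, not COSST's actual mechanism:

```python
import numpy as np

def filter_reliable_pseudolabels(features, pseudo_labels, keep_fraction=0.8):
    """Return a boolean mask marking the pseudo-labeled samples retained
    for self-training: for each class, the fraction of samples nearest
    (in Euclidean distance) to that class's centroid in latent space."""
    features = np.asarray(features, float)
    pseudo_labels = np.asarray(pseudo_labels)
    keep = np.zeros(len(pseudo_labels), bool)
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dist = np.linalg.norm(features[idx] - centroid, axis=1)
        n_keep = max(1, int(round(keep_fraction * len(idx))))
        keep[idx[np.argsort(dist)[:n_keep]]] = True  # drop farthest outliers
    return keep
```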


Subject(s)
Factual Databases, Deep Learning, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Algorithms, X-Ray Computed Tomography/methods, Supervised Machine Learning
4.
Radiat Oncol ; 17(1): 129, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869525

ABSTRACT

BACKGROUND: We describe and evaluate a deep network algorithm which automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. METHODS: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products "syngo.via RT Image Suite VB50" and "AI-Rad Companion Organs RT VA20" (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap, and distance, e.g., Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD95). The contours were also compared visually slice by slice. RESULTS: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by the heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm), and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreement, with some exceptions for the heart and rectum. CONCLUSIONS: The DI2IN algorithm automatically generated contours for organs at risk close to those of a human expert, making the contouring step in radiation treatment planning simpler and faster. A few cases still required manual corrections, mainly for the heart and rectum.
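The HD95 metric used above is the 95th-percentile variant of the Hausdorff distance, which is far less sensitive to single stray contour points than the plain maximum. A brute-force illustrative sketch over small 2D contour point sets (not the evaluation code used in the study):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets (e.g. contour or surface points), in the points' units."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # pairwise distance matrix: d[i, j] = ||a_i - b_j||
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # and vice versa
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

The brute-force pairwise matrix is fine for contours with a few thousand points; production implementations typically use distance transforms or k-d trees instead.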


Subject(s)
Deep Learning, X-Ray Computed Tomography, Algorithms, Humans, Computer-Assisted Image Processing/methods, Organs at Risk, Computer-Assisted Radiotherapy Planning/methods, Thorax, X-Ray Computed Tomography/methods
5.
J Nucl Med ; 61(12): 1786-1792, 2020 12.
Article in English | MEDLINE | ID: mdl-32332147

ABSTRACT

Prostate-specific membrane antigen (PSMA)-targeting PET imaging is becoming the reference standard for prostate cancer staging, especially in advanced disease. Yet, the implications of PSMA PET-derived whole-body tumor volume for overall survival are poorly elucidated to date. This might be because semiautomated quantification of whole-body tumor volume as a PSMA PET biomarker is an unmet clinical challenge. Therefore, in the present study we propose and evaluate a software tool that enables the semiautomated quantification of PSMA PET biomarkers such as whole-body tumor volume. Methods: The proposed quantification is implemented as a research prototype. PSMA-accumulating foci were automatically segmented by a percentage threshold (50% of local SUVmax). Neural networks were trained to segment organs in PET/CT acquisitions (training: 8,632 CTs; validation: 53 CTs). Thereby, PSMA foci within organs of physiologic PSMA uptake were semiautomatically excluded from the analysis. Pretherapeutic PSMA PET/CTs of 40 consecutive patients treated with 177Lu-PSMA-617 were evaluated in this analysis. The whole-body tumor volume (PSMATV50), SUVmax, SUVmean, and other whole-body imaging biomarkers were calculated for each patient. Semiautomatically derived results were compared with manual readings in a subcohort (by 1 nuclear medicine physician). Additionally, an interobserver evaluation of the semiautomated approach was performed in a subcohort (by 2 nuclear medicine physicians). Results: Manually and semiautomatically derived PSMA metrics were highly correlated (PSMATV50: R2 = 1.000, P < 0.001; SUVmax: R2 = 0.988, P < 0.001). The interobserver agreement of the semiautomated workflow was also high (PSMATV50: R2 = 1.000, P < 0.001, intraclass correlation coefficient = 1.000; SUVmax: R2 = 0.988, P < 0.001, intraclass correlation coefficient = 0.997).
PSMATV50 (ml) was a significant predictor of overall survival (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.002) and remained so in a multivariate regression including other biomarkers (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.004). Conclusion: PSMATV50 is a promising PSMA PET biomarker that is reproducible and easily quantified by the proposed semiautomated software. Moreover, PSMATV50 is a significant predictor of overall survival in patients with advanced prostate cancer who receive 177Lu-PSMA-617 therapy.
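The 50%-of-local-SUVmax thresholding described above can be sketched as follows. The function name and its region-of-interest argument are illustrative assumptions, not the prototype's API:

```python
import numpy as np

def segment_focus(suv, roi_mask, threshold_fraction=0.5):
    """Segment a PSMA-avid focus within a region of interest by keeping
    voxels at or above a fraction of the local SUVmax (50% by default)."""
    suv = np.asarray(suv, float)
    local_max = suv[roi_mask].max()
    return roi_mask & (suv >= threshold_fraction * local_max)
```

Summing the resulting mask and multiplying by the voxel volume would then yield a per-focus volume, which aggregated over all foci gives a whole-body tumor volume in the spirit of PSMATV50.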


Subject(s)
Edetic Acid/analogs &amp; derivatives, Oligopeptides, Positron Emission Tomography Computed Tomography, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Tumor Burden, Aged, Automation, Tumor Biomarkers/metabolism, Gallium Isotopes, Gallium Radioisotopes, Humans, Computer-Assisted Image Processing, Male, Observer Variation, Prostatic Neoplasms/blood, Prostatic Neoplasms/metabolism, Software, Survival Analysis
6.
Eur J Radiol ; 126: 108918, 2020 May.
Article in English | MEDLINE | ID: mdl-32171914

ABSTRACT

PURPOSE: To evaluate the performance of an artificial intelligence (AI) based software solution for liver volumetric analyses and to compare the results to manual contour segmentation. MATERIALS AND METHODS: We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep reinforcement learning for 3D body marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume. The mean processing time per dataset was recorded for each method. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. RESULTS: The mean absolute error of the automatically derived measurement was 44.3 mL (representing 2.37% of the averaged liver volumes). The liver volume was dependent neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared with 219.34 s for manual segmentation. We found an excellent agreement between both approaches, with an ICC value of 0.996. CONCLUSION: The results of our study demonstrate that AI-powered, fully automated liver volumetric analyses can be performed with excellent accuracy, reproducibility, robustness, and speed, and with high agreement with manual segmentation.
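The reported error figures (mean absolute error and its share of the averaged liver volume) are simple to derive from paired volume measurements; an illustrative helper with assumed names:

```python
import numpy as np

def volume_error_stats(auto_ml, manual_ml):
    """Mean absolute error (mL) of automatic volumes against the manual
    reference, plus that error as a percentage of the mean manual volume."""
    auto_ml = np.asarray(auto_ml, float)
    manual_ml = np.asarray(manual_ml, float)
    mae = np.mean(np.abs(auto_ml - manual_ml))
    return mae, 100.0 * mae / manual_ml.mean()
```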


Subject(s)
Algorithms, Computer-Assisted Image Interpretation/methods, Liver Diseases/diagnostic imaging, X-Ray Computed Tomography/methods, Artificial Intelligence, Deep Learning, Humans, Liver/diagnostic imaging, Reproducibility of Results, Retrospective Studies
7.
ArXiv ; 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-32550252

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first measure (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and present (April 2020). Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.
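A global percentage-of-opacity (PO) style measure can be illustrated as the share of lung voxels covered by the segmented abnormality mask. This is a simplified sketch with assumed names, not the paper's implementation:

```python
import numpy as np

def percent_opacity(lesion_mask, lung_mask):
    """Percentage of the lung volume occupied by segmented abnormality:
    100 * |lesion ∩ lung| / |lung|, over binary voxel masks."""
    lesion = np.asarray(lesion_mask, bool)
    lung = np.asarray(lung_mask, bool)
    return 100.0 * np.sum(lesion & lung) / np.sum(lung)
```

A lobe-wise score in the spirit of LSS would apply the same ratio per lobe mask and map each lobe's involvement onto a discrete severity scale before summing.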

8.
Radiol Artif Intell ; 2(4): e200048, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33928255

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first measure (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and present (April 2020). Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.
