Results 1 - 8 of 8
1.
Eur Radiol ; 32(6): 4292-4303, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35029730

ABSTRACT

OBJECTIVES: To compare automated lung CT volume (CTvol) with pulmonary function tests in an interstitial lung disease (ILD) population, then to compare CTvol loss between idiopathic pulmonary fibrosis (IPF) and non-IPF patients and to explore the prognostic value of annual CTvol loss in IPF. METHODS: We conducted a retrospective study in an expert center on consecutive patients with ILD between 2005 and 2018. CTvol was measured automatically using commercial software based on a deep learning algorithm. In a first group, Spearman correlation coefficients (r) between forced vital capacity (FVC), total lung capacity (TLC), and CTvol were calculated. In a second group, annual CTvol loss was calculated using linear regression analysis and compared with the Mann-Whitney test. In a last group of IPF patients, annual CTvol loss was calculated between baseline and 1-year CTs, and the Youden index was used to investigate a prognostic threshold for major adverse events at 3 years. Univariate analyses and log-rank tests were performed. RESULTS: In total, 560 patients (4610 CTs) were analyzed. Across 1171 CTs, CTvol was correlated with FVC (r: 0.86) and TLC (r: 0.84) (p < 0.0001). In 408 patients (3332 CTs), median annual CTvol loss was 155.7 mL in IPF versus 50.7 mL in non-IPF (p < 0.0001) over 5.03 years. In 73 IPF patients, a relative annual CTvol loss of 7.9% was associated with major adverse events (log-rank, p < 0.0001) in univariate analysis (p < 0.001). CONCLUSIONS: Automated lung CT volume may be an alternative or complementary biomarker to pulmonary function tests for the assessment of lung volume loss in ILD. KEY POINTS: • There is a good correlation between lung CT volume and forced vital capacity, as well as with total lung capacity measurements (r of 0.86 and 0.84 respectively, p < 0.0001). • Median annual CT volume loss is significantly higher in patients with idiopathic pulmonary fibrosis than in patients with other fibrotic interstitial lung diseases (155.7 versus 50.7 mL, p < 0.0001). • In idiopathic pulmonary fibrosis, a relative annual CT volume loss higher than 9.4% is associated with a significantly reduced mean survival time of 2.0 years versus 2.8 years (log-rank, p < 0.0001).
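For readers who want to reproduce the two core statistical steps, a minimal Python sketch follows, assuming per-patient longitudinal data; the helper names and example values are hypothetical and not taken from the study.

```python
# Sketch of the two analyses described above (hypothetical data layout):
# (1) Spearman correlation between automated lung CT volume and a PFT value,
# (2) per-patient annual CT volume loss as the slope of a linear regression.
import numpy as np
from scipy import stats

def spearman_vs_pft(ctvol_ml, pft_ml):
    """Spearman r between automated CT volume and a PFT measure (FVC or TLC)."""
    r, p = stats.spearmanr(ctvol_ml, pft_ml)
    return r, p

def annual_ctvol_loss(years_since_baseline, ctvol_ml):
    """Annual CT volume loss (mL/year): negated slope of a linear fit."""
    slope, _intercept, _r, _p, _se = stats.linregress(years_since_baseline,
                                                      ctvol_ml)
    return -slope  # positive result = volume lost per year

# Made-up example: one patient followed over four CTs.
years = np.array([0.0, 1.1, 2.0, 3.2])
vols = np.array([4200.0, 4050.0, 3900.0, 3700.0])
print(f"annual CTvol loss: {annual_ctvol_loss(years, vols):.1f} mL/year")
```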


Subject(s)
Idiopathic Pulmonary Fibrosis, Interstitial Lung Diseases, Humans, Idiopathic Pulmonary Fibrosis/diagnostic imaging, Lung/diagnostic imaging, Interstitial Lung Diseases/diagnostic imaging, Lung Volume Measurements, Retrospective Studies, X-Ray Computed Tomography/methods, Vital Capacity
2.
Eur Radiol ; 32(7): 4780-4790, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35142898

ABSTRACT

OBJECTIVE: This study aimed to develop and investigate the performance of a deep learning model based on a convolutional neural network (CNN) for the automatic segmentation of polycystic livers on CT imaging. METHOD: This retrospective study used CT images of polycystic livers. To develop the CNN, supervised training and validation phases were performed using 190 CT series. To assess performance, the test phase was performed using 41 CT series. Manual segmentation by an expert radiologist (Rad1a) served as the reference for all comparisons. Intra-observer variability was determined by the same reader after 12 weeks (Rad1b), and inter-observer variability by a second reader (Rad2). The Dice similarity coefficient (DSC) evaluated overlap between segmentations. CNN performance was assessed using the concordance correlation coefficient (CCC) and pairwise differences between CCCs, with confidence intervals estimated by bootstrap, as well as Bland-Altman analyses. Liver segmentation time was recorded automatically for each method. RESULTS: A total of 231 series from 129 CT examinations of 88 consecutive patients were collected. For the CNN, the DSC was 0.95 ± 0.03 and volume analyses yielded a CCC of 0.995 compared with the reference. No statistical difference was observed between the CCC of CNN automatic segmentation and those of the manual segmentations performed to evaluate inter-observer and intra-observer variability. While manual segmentation required 22.4 ± 10.4 min, central and graphics processing units took an average of 5.0 ± 2.1 s and 2.0 ± 1.4 s, respectively. CONCLUSION: Compared with manual segmentation, automated segmentation of polycystic livers using a deep learning method was much faster with similar performance. KEY POINTS: • Automatic volumetry of polycystic livers using an artificial intelligence method allows much faster segmentation than expert manual segmentation, with similar performance. • No statistical difference was observed between automatic segmentation and either inter-observer or intra-observer manual segmentation variability.
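The two agreement measures named above follow standard definitions; a short sketch of how they might be computed is shown below, with inputs assumed to be NumPy arrays (binary masks for the DSC, paired volume measurements for the CCC).

```python
# Sketch of the evaluation metrics used above: the Dice similarity
# coefficient (DSC) for mask overlap and Lin's concordance correlation
# coefficient (CCC) for volume agreement.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired volumes."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```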


Subject(s)
Deep Learning, Artificial Intelligence, Humans, Computer-Assisted Image Processing/methods, Liver/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography/methods
3.
Med Phys ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39140793

ABSTRACT

BACKGROUND: Recent advancements in anomaly detection have paved the way for novel radiological reading assistance tools that support the identification of findings, aimed at saving time. The clinical adoption of such applications requires a low rate of false positives while maintaining high sensitivity. PURPOSE: In light of recent interest and development in multi-pathology identification, we present a novel method, based on a recent contrastive self-supervised approach, for identifying multiple chest-related abnormalities including low lung density areas ("LLDA"), consolidation ("CONS"), nodules ("NOD") and interstitial pattern ("IP"). Our approach alerts radiologists to abnormal regions within a computed tomography (CT) scan by providing 3D localization. METHODS: We introduce a new method for the classification and localization of multiple chest pathologies in 3D chest CT scans. Our goal is to distinguish four common chest-related abnormalities ("LLDA", "CONS", "NOD", "IP") from "NORMAL". The method is based on a 3D patch-based classifier with a ResNet backbone encoder pretrained using a recent contrastive self-supervised approach and a fine-tuned classification head. We leverage the SimCLR contrastive framework for pretraining on an unannotated dataset of randomly selected patches, and we then fine-tune it on a labeled dataset. During inference, this classifier generates probability maps for each abnormality across the CT volume, which are aggregated to produce a multi-label patient-level prediction. We compare different training strategies: random initialization, ImageNet weight initialization, frozen SimCLR pretrained weights, and fine-tuned SimCLR pretrained weights. Each training strategy is evaluated on a validation set for hyperparameter selection and tested on a test set. Additionally, we explore the fine-tuned SimCLR pretrained classifier for 3D pathology localization and conduct a qualitative evaluation. RESULTS: Validated on 111 chest scans for hyperparameter selection and subsequently tested on 251 chest scans with multiple abnormalities, our method achieves an area under the receiver operating characteristic curve (AUROC) of 0.931 (95% confidence interval [CI]: [0.9034, 0.9557], p-value < 0.001) and 0.963 (95% CI: [0.952, 0.976], p-value < 0.001) in the multi-label and binary (i.e., normal versus abnormal) settings, respectively. Notably, our method surpasses the AUROC threshold of 0.9 for two abnormalities, IP (0.974) and LLDA (0.952), while achieving values of 0.853 and 0.791 for NOD and CONS, respectively. Furthermore, our results highlight the superiority of incorporating contrastive pretraining within the patch classifier, which outperforms ImageNet pretraining and randomly initialized counterparts (F1 score = 0.943, 0.792, and 0.677, respectively). Qualitatively, the method achieved a satisfactory 88.8% completeness rate in localization and maintained an 88.3% accuracy rate against false positives. CONCLUSIONS: The proposed method integrates self-supervised learning algorithms for pretraining, uses a patch-based approach for 3D pathology localization, and develops an aggregation method for multi-label prediction at the patient level. It shows promise in efficiently detecting and localizing multiple anomalies within a single scan.
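As a rough illustration of the aggregation step, the sketch below turns per-patch abnormality probabilities into a patient-level multi-label prediction; the max-over-patches rule and the 0.5 cutoffs are assumptions, since the paper's exact aggregation rule is not spelled out in the abstract.

```python
# Sketch: aggregate per-patch abnormality probabilities over a CT volume
# into a patient-level multi-label prediction (assumed aggregation rule).
import numpy as np

LABELS = ["LLDA", "CONS", "NOD", "IP"]

def aggregate_patient_prediction(patch_probs: np.ndarray,
                                 thresholds=None) -> dict:
    """patch_probs: (n_patches, 4) classifier outputs per abnormality."""
    if thresholds is None:
        thresholds = {label: 0.5 for label in LABELS}  # assumed cutoffs
    volume_scores = patch_probs.max(axis=0)  # strongest patch evidence
    return {label: bool(volume_scores[i] >= thresholds[label])
            for i, label in enumerate(LABELS)}

# Example: 3 patches scored for the 4 abnormalities.
probs = np.array([[0.1, 0.2, 0.1, 0.9],
                  [0.2, 0.1, 0.6, 0.8],
                  [0.1, 0.1, 0.2, 0.3]])
print(aggregate_patient_prediction(probs))  # NOD and IP flagged
```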

4.
Diagn Interv Imaging ; 105(3): 97-103, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38261553

ABSTRACT

PURPOSE: The purpose of this study was to propose a deep learning-based approach to detect pulmonary embolism (PE) and quantify its severity using the Qanadli score and the right ventricle to left ventricle (RV/LV) diameter ratio on three-dimensional (3D) computed tomography pulmonary angiography (CTPA) examinations with limited annotations. MATERIALS AND METHODS: Using a database of 3D CTPA examinations of 1268 patients with image-level annotations, and two public datasets of CTPA examinations from 91 (CAD-PE) and 35 (FUME-PE) patients with pixel-level annotations, a pipeline was built to: (i) detect blood clots; (ii) perform PE-positive versus PE-negative classification; (iii) estimate the Qanadli score; and (iv) predict the RV/LV diameter ratio. The method was evaluated on a test set of 378 patients. Performance on PE classification and severity quantification was quantitatively assessed using an area under the curve (AUC) analysis for PE classification and a coefficient of determination (R²) for the Qanadli score and the RV/LV diameter ratio. RESULTS: Quantitative evaluation led to an overall AUC of 0.870 (95% confidence interval [CI]: 0.850-0.900) for the PE classification task on the training set and an AUC of 0.852 (95% CI: 0.810-0.890) on the test set. Regression analysis yielded R² values of 0.717 (95% CI: 0.668-0.760) and 0.723 (95% CI: 0.668-0.766) for the Qanadli score and the RV/LV diameter ratio, respectively, on the test set. CONCLUSION: This study shows the feasibility of using AI-based assistance tools to detect blood clots and estimate PE severity scores on 3D CTPA examinations, achieved by leveraging blood clot and cardiac segmentations. Further studies are needed to assess the effectiveness of these tools in clinical practice.
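For context, the Qanadli obstruction index weights each of the 20 segmental pulmonary arteries (10 per lung) by its degree of occlusion (0 patent, 1 partial, 2 complete), for a maximum score of 40, usually reported as a percentage. The sketch below computes this simplified segmental form; mapping detected clots to arterial segments is the hard part and is assumed to have been done upstream.

```python
# Sketch of a simplified segmental Qanadli obstruction index, the severity
# score the pipeline above estimates from detected blood clots.
def qanadli_index(segment_weights: list[int]) -> float:
    """segment_weights: 20 values in {0, 1, 2}, one per segmental artery."""
    if len(segment_weights) != 20:
        raise ValueError("expected one weight per segmental artery (20)")
    if any(w not in (0, 1, 2) for w in segment_weights):
        raise ValueError("weights must be 0, 1 or 2")
    return 100.0 * sum(segment_weights) / 40.0  # percentage of maximum

# Example: 3 partially and 2 completely occluded segments -> 17.5%.
weights = [1, 1, 1, 2, 2] + [0] * 15
print(f"Qanadli index: {qanadli_index(weights):.1f}%")
```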


Subject(s)
Deep Learning, Pulmonary Embolism, Thrombosis, Humans, X-Ray Computed Tomography/methods, Pulmonary Embolism/diagnostic imaging, Heart Ventricles, Retrospective Studies
5.
Med Phys ; 49(2): 1108-1122, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34689353

ABSTRACT

PURPOSE: In computed tomography (CT) cardiovascular imaging, the numerous contrast injection protocols used to enhance structures make it difficult to gather training datasets for deep learning applications supporting diverse protocols. Moreover, creating annotations on non-contrast scans is extremely tedious. Recently, spectral CT virtual non-contrast (VNC) images have been used as data augmentation to train segmentation networks that perform on enhanced and true non-contrast (TNC) scans alike, while improving results on protocols absent from their training dataset. However, spectral data are not widely available, making it difficult to gather specific datasets for each task. As a solution, we present a data augmentation workflow, based on a trained image translation network, that brings spectral-like augmentation to any conventional CT dataset. METHOD: The conventional-CT-to-spectral image translation network (HUSpectNet) was first trained to generate VNC from conventional Hounsfield unit (HU) images, using an unannotated spectral dataset of 1830 patients. It was then tested on a second dataset of 300 spectral CT scans by comparing VNC generated through deep learning (VNCDL) to their true counterparts. To illustrate and compare our workflow's efficiency with true spectral augmentation, HUSpectNet was applied to a third dataset of 112 spectral scans to generate VNCDL alongside HU and VNC images. Three different three-dimensional (3D) networks (U-Net, X-Net, and U-Net++) were trained for multi-label heart segmentation, following four augmentation strategies. As baselines, trainings were performed on contrasted images without (HUonly) and with conventional gray-value augmentation (HUaug). Then, the same networks were trained using a proportion of contrasted and VNC/VNCDL images (TrueSpec/GenSpec). Each training strategy applied to each architecture was evaluated using Dice coefficients on a fourth multicentric, multivendor single-energy CT dataset of 121 patients, including different contrast injection protocols and unenhanced scans. The U-Net++ results were further explored with distance metrics on every label. RESULTS: Tested on 300 full scans, our HUSpectNet translation network shows a mean absolute error of 6.70 ± 2.83 HU between VNCDL and VNC, while the peak signal-to-noise ratio reaches 43.89 dB. GenSpec and TrueSpec show very close results regardless of the protocol and architecture used: mean Dice coefficients (DSCmean) agree within a margin of 0.006, ranging from 0.879 to 0.938. Their performances increase significantly on TNC scans (p-values < 0.017 for all architectures) compared to HUonly and HUaug, with DSCmean of 0.448/0.770/0.879/0.885 for HUonly/HUaug/TrueSpec/GenSpec using the U-Net++ architecture. Significant improvements are also noted for all architectures on chest-abdominal-pelvic scans (p-values < 0.007) compared to HUonly, and on pulmonary embolism scans (p-values < 0.039) compared to HUaug. Using U-Net++, DSCmean reaches 0.892/0.901/0.903 for HUonly/TrueSpec/GenSpec on pulmonary embolism scans and 0.872/0.896/0.896 on chest-abdominal-pelvic scans. CONCLUSION: Using the proposed workflow, we trained versatile heart segmentation networks on a dataset of conventional enhanced CT scans, providing robust predictions on both enhanced scans with different contrast injection protocols and TNC scans. The performances obtained were not significantly inferior to training on a genuine spectral CT dataset, regardless of the architecture implemented. Using a general-purpose conventional-to-spectral CT translation network as data augmentation could therefore reduce data collection and annotation requirements for machine learning-based CT studies, while extending their range of application.
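The two image-agreement figures reported for HUSpectNet (MAE in HU and PSNR in dB) follow standard definitions; a minimal sketch is given below, with the PSNR dynamic range being an assumption since the paper's choice is not stated in the abstract.

```python
# Sketch of the translation-quality metrics reported above: mean absolute
# error in Hounsfield units and peak signal-to-noise ratio between
# generated (VNC_DL) and true VNC volumes.
import numpy as np

def mae_hu(vnc_dl: np.ndarray, vnc_true: np.ndarray) -> float:
    """Mean absolute error between generated and true VNC, in HU."""
    return float(np.abs(vnc_dl - vnc_true).mean())

def psnr_db(vnc_dl: np.ndarray, vnc_true: np.ndarray,
            data_range: float = 2000.0) -> float:
    """PSNR in dB; data_range is an assumed HU window width."""
    mse = float(((vnc_dl - vnc_true) ** 2).mean())
    return 10.0 * np.log10(data_range ** 2 / mse)
```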


Subject(s)
Thorax, X-Ray Computed Tomography, Heart/diagnostic imaging, Humans, Computer-Assisted Image Processing, Signal-to-Noise Ratio, Workflow
6.
Res Diagn Interv Imaging ; 4: 100018, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37284031

ABSTRACT

Objectives: We evaluated the contribution of lung lesion quantification on chest CT, using clinical Artificial Intelligence (AI) software, to predicting death and intensive care unit (ICU) admission in COVID-19 patients. Methods: For 349 patients with a positive COVID-19 PCR test who underwent a chest CT scan at admission or during hospitalization, we applied the AI software for lung and lung-lesion segmentation to obtain the lesion volume (LV) and the LV/total lung volume (TLV) ratio. ROC analysis was used to extract the best CT criterion for predicting death and ICU admission. Two prognostic models using multivariate logistic regression were constructed for each outcome and compared using AUC values. The first model ("Clinical") was based on patients' characteristics and clinical symptoms only. The second model ("Clinical+LV/TLV") also included the best CT criterion. Results: The LV/TLV ratio demonstrated the best performance for both outcomes, with AUCs of 67.8% (95% CI: 59.5 - 76.1) and 81.1% (95% CI: 75.7 - 86.5), respectively. For death prediction, AUC values were 76.2% (95% CI: 69.9 - 82.6) and 79.9% (95% CI: 74.4 - 85.5) for the "Clinical" and "Clinical+LV/TLV" models respectively, showing a significant performance increase (+3.7%; p-value < 0.001) when adding the LV/TLV ratio. Similarly, for ICU admission prediction, AUC values were 74.9% (95% CI: 69.2 - 80.6) and 84.8% (95% CI: 80.4 - 89.2) respectively, corresponding to a significant performance increase (+10%; p-value < 0.001). Conclusions: Using clinical AI software to quantify COVID-19 lung involvement on chest CT, combined with clinical variables, allows better prediction of death and ICU admission.
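A minimal sketch of the model comparison described above follows, using scikit-learn logistic regression; the feature matrices and variable names are placeholders, not the study's actual covariates.

```python
# Sketch: compare a "Clinical" logistic regression against a
# "Clinical+LV/TLV" model that adds the best CT criterion as a feature.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_and_auc(X_train, y_train, X_test, y_test) -> float:
    """Fit a logistic model and return its test-set AUC."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# X_clin_*: clinical variables only; X_full_*: clinical + LV/TLV column.
# auc_clin = fit_and_auc(X_clin_tr, y_tr, X_clin_te, y_te)
# auc_full = fit_and_auc(X_full_tr, y_tr, X_full_te, y_te)
# print(f"AUC gain from adding LV/TLV: {auc_full - auc_clin:+.3f}")
```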

7.
Int J Comput Assist Radiol Surg ; 16(10): 1699-1709, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34363582

ABSTRACT

PURPOSE: Recently, machine learning has outperformed established tools for automated segmentation in medical imaging. However, segmentation of cardiac chambers still proves challenging due to the variety of contrast agent injection protocols used in clinical practice, which induce disparities of contrast between cavities. Hence, training a generalist network requires large training datasets representative of these protocols. Furthermore, segmentation of unenhanced CT scans is hindered by the difficulty of obtaining ground truths from these images. Newly available spectral CT scanners allow innovative image reconstructions such as virtual non-contrast (VNC) imaging, mimicking non-contrast conventional CT studies from a contrasted scan. Recent publications have shown that networks can be trained using VNC to segment contrasted and unenhanced conventional CT scans, reducing annotated data requirements and the need for annotations on unenhanced scans. We propose an extensive evaluation of this statement. METHOD: We undertook multiple trainings of a 3D multi-label heart segmentation network with (HU-VNC) and without (HUonly) VNC as augmentation, using training datasets of decreasing size (114, 76, 57, 38, 29, and 19 patients). At each step, both networks were tested on a multi-vendor, multicentric dataset of 122 patients including different protocols: pulmonary embolism (PE), chest-abdomen-pelvis (CAP), heart CT angiography (CTA), and true non-contrast (TNC) scans. An in-depth comparison of the resulting Dice coefficients and distance metrics was performed for the networks trained on the largest dataset. RESULTS: HU-VNC trained on 57 patients significantly outperforms HUonly trained on 114 patients on CAP and TNC scans (mean Dice coefficients of 0.881 versus 0.835 and 0.882 versus 0.416, respectively). When trained on the largest dataset, significant improvements in all labels are noted for TNC and CAP scans (mean Dice coefficients of 0.882 versus 0.416 and 0.891 versus 0.835, respectively). CONCLUSION: Adding VNC images as training augmentation allows the network to perform on unenhanced scans and improves segmentation on other imaging protocols, while using a reduced training dataset.
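A sketch of what the HU-VNC augmentation strategy could look like at the dataset level is given below: each annotated contrasted series is optionally paired with its VNC reconstruction under the same ground-truth mask. The data structures and sampling ratio are assumptions for illustration; a real pipeline would stream volumes from disk.

```python
# Sketch: build a training pool where contrasted HU series are augmented
# with their VNC reconstructions, sharing the same segmentation mask.
import random

def build_training_pool(cases, vnc_ratio: float = 0.5, seed: int = 0):
    """cases: list of dicts with 'hu' and 'vnc' volumes and a shared 'mask'.
    Returns (image, mask) pairs where a fraction of samples use VNC."""
    rng = random.Random(seed)
    pool = []
    for case in cases:
        pool.append((case["hu"], case["mask"]))       # always keep HU
        if rng.random() < vnc_ratio:                  # sometimes add VNC
            pool.append((case["vnc"], case["mask"]))  # same ground truth
    rng.shuffle(pool)
    return pool
```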


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, Computed Tomography Angiography, Heart, Humans, Thorax
8.
Inf Process Med Imaging ; 20: 283-95, 2007.
Article in English | MEDLINE | ID: mdl-17633707

ABSTRACT

Segmentation of anatomical structures via minimal surface extraction using gradient-based metrics is a popular approach, but it exhibits limitations when contour information is weak or missing. We propose a new framework for defining metrics that are robust to missing image information. Given an object of interest, we combine gray-level information and knowledge about the spatial organization of cerebral structures into a fuzzy set that is guaranteed to include the object's boundaries. From this set we derive a metric, which is used in a minimal surface segmentation framework. We show how this metric leads to improved segmentation of subcortical gray matter structures. Quantitative results on the segmentation of the caudate nucleus in T1 MRI are reported for 18 normal subjects and 6 pathological cases.
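For intuition only, the LaTeX fragment below shows a classical gradient-based metric and one plausible way a metric could be derived from the fuzzy boundary set described above; the paper's exact formulation may differ.

```latex
% Classical edge-stopping metric (low cost on strong contours):
\[
  g(x) \;=\; \frac{1}{1 + \lVert \nabla I(x) \rVert^{2}}
\]
% One plausible set-derived metric, where \mu_B(x) \in [0,1] is the
% membership of voxel x in the fuzzy set guaranteed to contain the
% boundary: cost stays low wherever the set allows a boundary, even
% where the image gradient is weak or missing.
\[
  g_{f}(x) \;=\; 1 - \mu_{B}(x)
\]
```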


Subject(s)
Artificial Intelligence, Brain Neoplasms/diagnosis, Caudate Nucleus/pathology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Algorithms, Cluster Analysis, Fuzzy Logic, Humans, Radiometry/methods, Reproducibility of Results, Sensitivity and Specificity