Results 1 - 4 of 4
1.
Eur Radiol ; 32(7): 4780-4790, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35142898

ABSTRACT

OBJECTIVE: This study aimed to develop and investigate the performance of a deep learning model based on a convolutional neural network (CNN) for the automatic segmentation of polycystic livers on CT imaging. METHOD: This retrospective study used CT images of polycystic livers. To develop the CNN, supervised training and validation phases were performed using 190 CT series. To assess performance, the test phase was performed using 41 CT series. Manual segmentation by an expert radiologist (Rad1a) served as the reference for all comparisons. Intra-observer variability was determined by the same reader after 12 weeks (Rad1b), and inter-observer variability by a second reader (Rad2). The Dice similarity coefficient (DSC) was used to evaluate overlap between segmentations. CNN performance was assessed using the concordance correlation coefficient (CCC) and the pairwise differences between CCCs; confidence intervals were estimated with bootstrap and Bland-Altman analyses. Liver segmentation time was automatically recorded for each method. RESULTS: A total of 231 series from 129 CT examinations of 88 consecutive patients were collected. For the CNN, the DSC was 0.95 ± 0.03 and volume analyses yielded a CCC of 0.995 compared with the reference. No statistically significant difference in CCC was observed between CNN automatic segmentation and the manual segmentations performed to evaluate inter-observer and intra-observer variability. While manual segmentation required 22.4 ± 10.4 min, central and graphics processing units took an average of 5.0 ± 2.1 s and 2.0 ± 1.4 s, respectively. CONCLUSION: Compared with manual segmentation, automated segmentation of polycystic livers using a deep learning method achieved much faster segmentation with similar performance. KEY POINTS: • Automatic volumetry of polycystic livers using an artificial intelligence method allows much faster segmentation than expert manual segmentation, with similar performance. • No statistically significant difference was observed between automatic segmentation, inter-observer variability, and intra-observer variability.
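The two agreement measures named in this abstract, the Dice similarity coefficient and Lin's concordance correlation coefficient, can be written in a few lines. The sketch below is an illustration only, assuming hypothetical binary liver masks and paired volume series; it is not the study's own pipeline.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def concordance_correlation(x, y) -> float:
    """Lin's CCC between two sets of paired volume measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```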


Subjects
Deep Learning , Artificial Intelligence , Humans , Image Processing, Computer-Assisted/methods , Liver/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods
2.
Eur Radiol ; 32(6): 4292-4303, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35029730

ABSTRACT

OBJECTIVES: To compare lung CT volume (CTvol) with pulmonary function tests in an interstitial lung disease (ILD) population, to evaluate CTvol loss in idiopathic pulmonary fibrosis (IPF) versus non-IPF, and to explore the prognostic value of annual CTvol loss in IPF. METHODS: We conducted a retrospective study in an expert center on consecutive patients with ILD between 2005 and 2018. CTvol was measured automatically using commercial software based on a deep learning algorithm. In the first group, Spearman correlation coefficients (r) between forced vital capacity (FVC), total lung capacity (TLC), and CTvol were calculated. In a second group, annual CTvol loss was calculated using linear regression analysis and compared with the Mann-Whitney test. In a last group of IPF patients, annual CTvol loss was calculated between baseline and 1-year CTs to investigate, using the Youden index, its prognostic value for major adverse events at 3 years. Univariate and log-rank tests were performed. RESULTS: In total, 560 patients (4610 CTs) were analyzed. In 1171 CTs, CTvol was correlated with FVC (r: 0.86) and TLC (r: 0.84) (p < 0.0001). In 408 patients (3332 CTs), median annual CTvol loss was 155.7 mL in IPF versus 50.7 mL in non-IPF (p < 0.0001) over 5.03 years. In 73 IPF patients, a relative annual CTvol loss of 7.9% was associated with major adverse events (log-rank, p < 0.0001) in univariate analysis (p < 0.001). CONCLUSIONS: Automated lung CT volumetry may be an alternative or complementary biomarker to pulmonary function tests for assessing lung volume loss in ILD. KEY POINTS: • Lung CT volume correlates well with forced vital capacity as well as with total lung capacity measurements (r of 0.86 and 0.84, respectively, p < 0.0001). • Median annual CT volume loss is significantly higher in patients with idiopathic pulmonary fibrosis than in patients with other fibrotic interstitial lung diseases (155.7 versus 50.7 mL, p < 0.0001). • In idiopathic pulmonary fibrosis, a relative annual CT volume loss higher than 9.4% is associated with a significantly reduced mean survival time of 2.0 years versus 2.8 years (log-rank, p < 0.0001).
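As a rough illustration of the annual CTvol loss estimate and the CTvol-FVC correlation described above, the sketch below fits a per-patient straight line to serial lung volumes over time and computes a Spearman correlation across a cohort. Variable names and data are hypothetical; the commercial software's actual implementation is not public.

```python
import numpy as np
from scipy import stats

def annual_ctvol_loss(days_since_baseline, volumes_ml) -> float:
    """Slope of a linear fit of serial lung volumes, in mL/year (negative = loss)."""
    years = np.asarray(days_since_baseline, float) / 365.25
    slope, intercept, r, p, se = stats.linregress(years, volumes_ml)
    return slope

def ctvol_fvc_correlation(ctvol_ml, fvc_ml):
    """Cohort-level Spearman correlation between CT volume and FVC."""
    rho, p = stats.spearmanr(ctvol_ml, fvc_ml)
    return rho, p
```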


Subjects
Idiopathic Pulmonary Fibrosis , Lung Diseases, Interstitial , Humans , Idiopathic Pulmonary Fibrosis/diagnostic imaging , Lung/diagnostic imaging , Lung Diseases, Interstitial/diagnostic imaging , Lung Volume Measurements , Retrospective Studies , Tomography, X-Ray Computed/methods , Vital Capacity
3.
Int J Comput Assist Radiol Surg ; 16(10): 1699-1709, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34363582

ABSTRACT

PURPOSE: Recently, machine learning has outperformed established tools for automated segmentation in medical imaging. However, segmentation of cardiac chambers remains challenging because of the variety of contrast agent injection protocols used in clinical practice, which induce disparities of contrast between cavities. Hence, training a generalist network requires large training datasets representative of these protocols. Segmentation of unenhanced CT scans is further hindered by the difficulty of obtaining ground truths from these images. Newly available spectral CT scanners allow innovative image reconstructions such as virtual non-contrast (VNC) imaging, which mimics non-contrasted conventional CT studies from a contrasted scan. Recent publications have demonstrated that networks can be trained using VNC to segment contrasted and unenhanced conventional CT scans, reducing both annotated data requirements and the need for annotations on unenhanced scans. We propose an extensive evaluation of this claim. METHOD: We undertake multiple trainings of a 3D multi-label heart segmentation network with (HU-VNC) and without (HUonly) VNC as augmentation, using decreasing training dataset sizes (114, 76, 57, 38, 29, 19 patients). At each step, both networks are tested on a multi-vendor, multi-centric dataset of 122 patients covering different protocols: pulmonary embolism (PE), chest-abdomen-pelvis (CAP), heart CT angiography (CTA), and true non-contrast scans (TNC). An in-depth comparison of the resulting Dice coefficients and distance metrics is performed for the networks trained on the largest dataset. RESULTS: HU-VNC trained on 57 patients significantly outperforms HUonly trained on 114 on CAP and TNC scans (mean Dice coefficients of 0.881/0.835 and 0.882/0.416, respectively). When trained on the largest dataset, significant improvements in all labels are noted for TNC and CAP scans (mean Dice coefficients of 0.882/0.416 and 0.891/0.835, respectively). CONCLUSION: Adding VNC images as training augmentation allows the network to perform on unenhanced scans and improves segmentation on other imaging protocols, while using a reduced training dataset.
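The HU-VNC versus HUonly comparison rests on per-label Dice coefficients averaged over the test set. A minimal sketch of that evaluation step, assuming integer-labeled prediction and ground-truth volumes with hypothetical label ids, might look like this:

```python
import numpy as np

def per_label_dice(pred: np.ndarray, gt: np.ndarray, labels) -> dict:
    """Dice coefficient for each integer label of a multi-label segmentation volume."""
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

def mean_dice_over_cases(cases, labels):
    """Average per-label Dice over (prediction, ground-truth) volume pairs."""
    per_case = [list(per_label_dice(p, g, labels).values()) for p, g in cases]
    return np.mean(per_case, axis=0)  # one mean Dice value per label
```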


Subjects
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Computed Tomography Angiography , Heart , Humans , Thorax
4.
Inf Process Med Imaging ; 20: 283-95, 2007.
Article in English | MEDLINE | ID: mdl-17633707

ABSTRACT

Segmentation of anatomical structures via minimal surface extraction using gradient-based metrics is a popular approach, but it exhibits limitations when contour information is weak or missing. We propose a new framework for defining metrics that are robust to missing image information. Given an object of interest, we combine gray-level information with knowledge about the spatial organization of cerebral structures into a fuzzy set that is guaranteed to include the object's boundaries. From this set we derive a metric that is used in a minimal surface segmentation framework. We show how this metric leads to improved segmentation of subcortical gray matter structures. Quantitative results for the segmentation of the caudate nucleus in T1 MRI are reported on 18 normal subjects and 6 pathological cases.
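To make the idea concrete, the sketch below fuses an intensity membership map with a spatial-prior membership map (a minimum t-norm conjunction) and converts the result into a cost map that is low inside the fuzzy region and high outside, the kind of metric a minimal-path or minimal-surface extractor could use. The Gaussian intensity model and the specific functional forms are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def intensity_membership(image, mu, sigma):
    """Fuzzy membership of each voxel to a Gaussian gray-level model of the structure."""
    return np.exp(-0.5 * ((image - mu) / sigma) ** 2)

def fuzzy_region(image, spatial_prior, mu, sigma):
    """Conjunctive fusion (minimum t-norm) of gray-level and spatial-organization knowledge."""
    return np.minimum(intensity_membership(image, mu, sigma), spatial_prior)

def metric_from_membership(membership, eps=1e-3):
    """Cost (metric) map: small where membership is high, large where it is low."""
    return 1.0 / (membership + eps)
```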


Subjects
Artificial Intelligence , Brain Neoplasms/diagnosis , Caudate Nucleus/pathology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Algorithms , Cluster Analysis , Fuzzy Logic , Humans , Radiometry/methods , Reproducibility of Results , Sensitivity and Specificity