1.
Int J Comput Assist Radiol Surg ; 16(10): 1699-1709, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34363582

ABSTRACT

PURPOSE: Machine learning has recently outperformed established tools for automated segmentation in medical imaging. Segmentation of the cardiac chambers nevertheless remains challenging because of the variety of contrast agent injection protocols used in clinical practice, which induces contrast disparities between cavities; training a generalist network therefore requires large training datasets representative of these protocols. Segmentation of unenhanced CT scans is further hindered by the difficulty of obtaining ground truths for such images. Newly available spectral CT scanners enable novel image reconstructions such as virtual non-contrast (VNC) imaging, which mimics a non-contrasted conventional CT study from a contrasted scan. Recent publications have shown that networks trained with VNC images can segment both contrasted and unenhanced conventional CT scans, reducing annotated-data requirements and removing the need for annotations on unenhanced scans. We present an extensive evaluation of this claim.

METHOD: We train a 3D multi-label heart segmentation network with (HU-VNC) and without (HUonly) VNC images as augmentation, using training datasets of decreasing size (114, 76, 57, 38, 29, and 19 patients). At each step, both networks are tested on a multi-vendor, multi-center dataset of 122 patients covering several protocols: pulmonary embolism (PE), chest-abdomen-pelvis (CAP), heart CT angiography (CTA), and true non-contrast (TNC) scans. An in-depth comparison of the resulting Dice coefficients and distance metrics is performed for the networks trained on the largest dataset.

RESULTS: HU-VNC trained on 57 patients significantly outperforms HUonly trained on 114 patients on CAP and TNC scans (mean Dice, HU-VNC/HUonly: 0.881/0.835 and 0.882/0.416, respectively). When both networks are trained on the largest dataset, significant improvements in all labels are noted for TNC and CAP scans (mean Dice, HU-VNC/HUonly: 0.882/0.416 and 0.891/0.835, respectively).

CONCLUSION: Adding VNC images as training augmentation enables the network to perform on unenhanced scans and improves segmentation on other imaging protocols, while using a reduced training dataset.
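The augmentation scheme is straightforward to express in code: each annotated contrast-enhanced case contributes both its conventional (HU) volume and its paired VNC reconstruction, and the two share a single label mask since they come from the same spectral acquisition. Below is a minimal PyTorch sketch of this idea, together with a per-label Dice metric of the kind used in the evaluation; the class and function names are hypothetical, and the authors' actual pipeline is not described at this level of detail.

```python
import random
import numpy as np
import torch
from torch.utils.data import Dataset

class HuVncDataset(Dataset):
    """Hypothetical training dataset: each case is a (hu, vnc, mask) triple
    of numpy arrays, where hu is the conventional reconstruction, vnc the
    paired virtual non-contrast reconstruction, and mask the shared
    multi-label heart annotation. use_vnc=False gives the HUonly baseline."""

    def __init__(self, cases, use_vnc=True):
        self.cases = cases
        self.use_vnc = use_vnc

    def __len__(self):
        return len(self.cases)

    def __getitem__(self, i):
        hu, vnc, mask = self.cases[i]
        # VNC augmentation: draw either reconstruction with equal probability;
        # the label mask is identical for both.
        img = vnc if self.use_vnc and random.random() < 0.5 else hu
        return (torch.from_numpy(img).float()[None],  # add channel dimension
                torch.from_numpy(mask).long())

def per_label_dice(pred, target, n_labels):
    """Dice coefficient for each foreground label (label 0 = background)."""
    scores = []
    for lbl in range(1, n_labels):
        p, t = pred == lbl, target == lbl
        denom = p.sum() + t.sum()
        scores.append(2.0 * np.logical_and(p, t).sum() / denom
                      if denom else float('nan'))
    return scores
```

With this sampling, a network trained on n annotated patients effectively sees 2n appearance variants at no extra annotation cost, which is consistent with the reported result that HU-VNC matches or beats HUonly on smaller training sets.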


Subjects
Image Processing, Computer-Assisted ; Tomography, X-Ray Computed ; Computed Tomography Angiography ; Heart ; Humans ; Thorax
2.
J Biomed Opt ; 20(8): 80502, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26263413

ABSTRACT

To enable tissue function-based tumor diagnosis on the large number of digital mammography systems already deployed worldwide, we propose a cost-effective and robust approach that incorporates tomographic optical tissue characterization into separately acquired digital mammograms. Using a flexible contour-based registration algorithm, we incorporate an independently measured two-dimensional x-ray mammogram as a structural prior in a joint optical/x-ray image reconstruction, yielding improved spatial detail in the optical images and robust optical property estimation. We validated this approach in a retrospective clinical study of 67 patients, including 30 malignant and 37 benign cases, and demonstrated that it can help distinguish malignant lesions from solid benign lesions and fibroglandular tissue, with performance comparable to an approach using spatially coregistered optical/x-ray measurements.
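One common way to use an x-ray image as a structural prior in diffuse optical reconstruction is a "soft prior": the registered mammogram is segmented into tissue regions, and a regularizer penalizes deviations of the optical properties from their region means, encouraging smoothness within (but not across) x-ray-derived regions. The linearized sketch below illustrates that general technique under simplifying assumptions (a precomputed sensitivity/Jacobian matrix J and per-node region labels); it is not the paper's exact joint reconstruction algorithm.

```python
import numpy as np

def soft_prior_matrix(labels):
    """Regularization matrix L such that (L @ mu)[i] equals mu[i] minus the
    mean of mu over the x-ray-derived region containing node i; penalizing
    ||L @ mu||^2 smooths within regions without blurring across them."""
    labels = np.asarray(labels)
    L = np.eye(labels.size)
    for region in np.unique(labels):
        idx = np.flatnonzero(labels == region)
        L[np.ix_(idx, idx)] -= 1.0 / idx.size  # subtract region mean
    return L

def reconstruct_step(J, y, labels, alpha=1e-2):
    """One linearized Tikhonov update: minimize
    ||J @ mu - y||^2 + alpha * ||L @ mu||^2.
    J (measurements x nodes) and y (measurement residuals) are assumed to
    come from a precomputed optical forward model."""
    L = soft_prior_matrix(labels)
    A = J.T @ J + alpha * (L.T @ L)
    return np.linalg.solve(A, J.T @ y)
```

As alpha tends to zero this reduces to an unregularized least-squares fit; larger alpha pulls each region toward a homogeneous estimate, which is what makes a structural prior robust when the optical data alone are underdetermined.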


Subjects
Breast Neoplasms/diagnosis ; Image Interpretation, Computer-Assisted/methods ; Mammography/methods ; Multimodal Imaging/methods ; Subtraction Technique ; Tomography, Optical/methods ; Algorithms ; Feasibility Studies ; Female ; Humans ; Image Enhancement/methods ; Pattern Recognition, Automated/methods ; Reproducibility of Results ; Sensitivity and Specificity