1.
Eur J Nucl Med Mol Imaging; 50(8): 2441-2452, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36933075

ABSTRACT

PURPOSE: The aim of this study was to develop a convolutional neural network (CNN) for the automatic detection and segmentation of gliomas using [18F]fluoroethyl-L-tyrosine ([18F]FET) PET.

METHODS: Ninety-three patients (84 in-house/7 external) who underwent a 20-40-min static [18F]FET PET scan were retrospectively included. Lesions and background regions were defined by two nuclear medicine physicians using the MIM software, such that delineations by one expert reader served as ground truth for training and testing the CNN model, while delineations by the second expert reader were used to evaluate inter-reader agreement. A multi-label CNN was developed to segment both the lesion and the background region, while a single-label CNN was implemented for lesion-only segmentation. Lesion detectability was evaluated by classifying [18F]FET PET scans as negative when no tumor was segmented and as positive otherwise, while segmentation performance was assessed using the Dice similarity coefficient (DSC) and the segmented tumor volume. Quantitative accuracy was evaluated using the maximal and mean tumor to mean background uptake ratios (TBRmax/TBRmean). CNN models were trained and tested by threefold cross-validation (CV) on the in-house data, while the external data were used for an independent evaluation to assess the generalizability of the two CNN models.

RESULTS: Based on the threefold CV, the multi-label CNN model achieved 88.9% sensitivity and 96.5% precision for discriminating between positive and negative [18F]FET PET scans, compared to 35.3% sensitivity and 83.1% precision obtained with the single-label CNN model. In addition, the multi-label CNN allowed an accurate estimation of the maximal/mean lesion and mean background uptake, resulting in an accurate TBRmax/TBRmean estimation compared to a semi-automatic approach. In terms of lesion segmentation, the multi-label CNN model (DSC = 74.6 ± 23.1%) performed on par with the single-label CNN model (DSC = 73.7 ± 23.2%), with tumor volumes estimated by the single-label and multi-label models (22.9 ± 23.6 ml and 23.1 ± 24.3 ml, respectively) closely approximating those delineated by the expert reader (24.1 ± 24.4 ml). The DSCs of both CNN models were in line with the inter-reader DSC obtained by comparing the second expert reader's lesion segmentations with those of the first, and the detection and segmentation performance of both CNN models determined on the in-house data was confirmed by the independent evaluation on the external data.

CONCLUSION: The proposed multi-label CNN model detected positive [18F]FET PET scans with high sensitivity and precision. Once a lesion was detected, accurate tumor segmentation and estimation of background activity were achieved, resulting in an automatic and accurate TBRmax/TBRmean estimation, minimizing user interaction and potential inter-reader variability.
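To make the quantification step concrete, below is a minimal Python sketch of how TBRmax and TBRmean could be computed once a multi-label model has produced lesion and background masks. The function name, array layout, and the simple mask-based background estimate are illustrative assumptions; the study itself derived the background region from expert delineations in MIM.

```python
import numpy as np

def tumor_to_background_ratios(pet, lesion_mask, background_mask):
    """Derive TBRmax/TBRmean from a static [18F]FET PET volume.

    pet             -- 3D array of PET uptake values
    lesion_mask     -- boolean 3D array, True inside the segmented lesion
    background_mask -- boolean 3D array, True inside the background region

    Returns (None, None) when no lesion voxels were segmented, i.e. the
    scan would be classified as negative.
    """
    if not lesion_mask.any():
        return None, None
    mean_background = pet[background_mask].mean()
    tbr_max = pet[lesion_mask].max() / mean_background
    tbr_mean = pet[lesion_mask].mean() / mean_background
    return tbr_max, tbr_mean
```

Because the multi-label model segments the background region as well as the lesion, both ratios can be reported without manual placement of a reference region, which is where the reduction in user interaction comes from.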


Subject(s)
Glioma; Humans; Retrospective Studies; Glioma/diagnostic imaging; Glioma/pathology; Positron-Emission Tomography/methods; Tyrosine; Neural Networks, Computer
2.
Eur Radiol; 33(2): 959-969, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36074262

ABSTRACT

OBJECTIVES: To develop a visual ensemble selection of deep convolutional neural networks (CNN) for 3D segmentation of breast tumors using T1-weighted dynamic contrast-enhanced (T1-DCE) MRI.

METHODS: Multi-center 3D T1-DCE MRI scans (n = 141) were acquired for a cohort of patients diagnosed with locally advanced or aggressive breast cancer. Tumor lesions in 111 scans were divided equally between two radiologists and segmented for training. The remaining 30 scans were segmented independently by both radiologists for testing. Three 3D U-Net models were trained using either post-contrast images alone or a combination of post-contrast and subtraction images fused at either the image or the feature level. Segmentation accuracy was evaluated quantitatively using the Dice similarity coefficient (DSC) and the Hausdorff distance (HD95), and scored qualitatively by a radiologist as excellent, useful, helpful, or unacceptable. Based on this score, a visual ensemble approach selecting the best segmentation among the three models was proposed.

RESULTS: The mean and standard deviation of the DSC and HD95 between the two radiologists were 77.8 ± 10.0% and 5.2 ± 5.9 mm. Using the visual ensemble selection, a DSC of 78.1 ± 16.2% and an HD95 of 14.1 ± 40.8 mm were reached. The qualitative assessment was rated excellent in 50% of cases and excellent or useful in 77%.

CONCLUSION: Using subtraction images in addition to post-contrast images provided complementary information for 3D segmentation of breast lesions by CNN. A visual ensemble selection allowing the radiologist to select the best segmentation produced by the three 3D U-Net models achieved results comparable to inter-radiologist agreement, yielding 77% of segmented volumes considered excellent or useful.

KEY POINTS:
• Deep convolutional neural networks were developed using T1-weighted post-contrast and subtraction MRI to perform automated 3D segmentation of breast tumors.
• A visual ensemble selection allowing the radiologist to choose the best segmentation among the three 3D U-Net models outperformed each of the three models individually.
• The visual ensemble selection provided clinically useful segmentations in 77% of cases, potentially allowing a valuable reduction of the radiologist's manual 3D segmentation workload and greatly facilitating quantitative studies of non-invasive biomarkers in breast MRI.
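In code terms, the visual ensemble step reduces to rating each candidate mask on the four-level qualitative scale and keeping the highest-rated one. The sketch below uses hypothetical names (rate_fn stands in for the radiologist's visual assessment); the Dice helper shows the quantitative metric reported alongside it.

```python
import numpy as np

# Four-level qualitative scale used by the reading radiologist
QUALITY = {"excellent": 3, "useful": 2, "helpful": 1, "unacceptable": 0}

def visual_ensemble_select(candidate_masks, rate_fn):
    """Return the best-rated segmentation among the candidates
    (here, the outputs of the three 3D U-Net models).

    rate_fn maps a boolean 3D mask to one of the QUALITY keys.
    """
    ratings = [QUALITY[rate_fn(mask)] for mask in candidate_masks]
    return candidate_masks[int(np.argmax(ratings))]

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```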


Subject(s)
Breast Neoplasms; Image Processing, Computer-Assisted; Humans; Female; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Breast/pathology; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Magnetic Resonance Imaging/methods
3.
IEEE Trans Biomed Eng; 69(7): 2153-2164, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34941496

ABSTRACT

Convolutional neural networks (CNNs) for brain tumor segmentation are generally developed using complete sets of magnetic resonance imaging (MRI) sequences for both training and inference. As such, these algorithms are not trained for realistic clinical scenarios in which some of the MRI sequences used for training are missing during inference. To increase clinical applicability, we proposed a cross-modal distillation approach that leverages the availability of multi-sequence MRI data during training to generate an enriched CNN model which uses only single-sequence MRI data for inference but outperforms a single-sequence CNN model. We assessed the performance of the proposed method for whole tumor and tumor core segmentation with multi-sequence MRI data available for training but only T1-weighted (T1) sequence data available for inference, using the BraTS 2018 and in-house datasets. Results showed that cross-modal distillation significantly improved the Dice score for both whole tumor and tumor core segmentation when only T1 sequence data were available for inference. In the evaluation on the in-house dataset, cross-modal distillation achieved average Dice scores of 79.04% and 69.39% for whole tumor and tumor core segmentation, respectively, while a single-sequence U-Net model using T1 sequence data for both training and inference achieved average Dice scores of 73.60% and 62.62%. These findings confirmed cross-modal distillation as an effective method to increase the potential of single-sequence CNN models, such that segmentation performance is less compromised by missing MRI sequences or by having only one MRI sequence available for segmentation.
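The abstract does not spell out the distillation objective, so the PyTorch sketch below shows one common form of cross-modal distillation consistent with the description: a single-sequence student is trained to match both the expert labels and the softened predictions of a frozen multi-sequence teacher. The loss weight alpha and temperature T are illustrative hyper-parameters, not values reported in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Combined segmentation + distillation objective.

    student_logits -- (N, C, D, H, W) logits from the single-sequence student
    teacher_logits -- (N, C, D, H, W) logits from the frozen multi-sequence teacher
    target         -- (N, D, H, W) integer ground-truth labels from the expert reader
    """
    # Supervised segmentation loss against the expert labels
    seg = F.cross_entropy(student_logits, target)
    # Soft-label loss transferring the teacher's multi-sequence knowledge
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * seg + (1.0 - alpha) * kd
```

At inference time only the student and its single T1 input are needed; the teacher, having transferred its knowledge during training, is discarded.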


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging