1.
Radiat Oncol. 2024 Aug 07;19(1):106.
Article in English | MEDLINE | ID: mdl-39113123

ABSTRACT

PURPOSE: Convolutional Neural Networks (CNNs) have emerged as transformative tools in radiation oncology, significantly advancing the precision of contouring practices. However, the adaptability of these algorithms across diverse scanners, institutions, and imaging protocols remains a considerable obstacle. This study investigates the effect of incorporating institution-specific datasets into the training regimen of CNNs to assess their generalization ability in real-world clinical environments. Focusing on a data-centric analysis, we examine the influence of multi-center versus single-center training approaches on algorithm performance.

METHODS: nnU-Net is trained on a dataset comprising 161 18F-PSMA-1007 PET images collected from four institutions (Freiburg: n = 96, Munich: n = 19, Cyprus: n = 32, Dresden: n = 14). The dataset is partitioned such that data from each center are systematically excluded from training and used solely for testing, assessing the model's generalizability and adaptability to data from unfamiliar sources. Performance is compared through 5-fold cross-validation, contrasting models trained on single-center datasets with those trained on the aggregated multi-center dataset. Dice Similarity Coefficient (DSC), Hausdorff distance, and volumetric analysis are used as primary evaluation metrics.

RESULTS: The mixed training approach yielded a median DSC of 0.76 (IQR: 0.64-0.84) in five-fold cross-validation, showing no significant difference (p = 0.18) from models trained with data from each center excluded, which achieved a median DSC of 0.74 (IQR: 0.56-0.86). Significant performance improvements with multi-center training were observed for the Dresden cohort (multi-center median DSC 0.71, IQR: 0.58-0.80 vs. single-center 0.68, IQR: 0.50-0.80, p < 0.001) and the Cyprus cohort (multi-center 0.74, IQR: 0.62-0.83 vs. single-center 0.72, IQR: 0.54-0.82, p < 0.01). Munich and Freiburg also showed improvements with multi-center training, but without statistical significance (Munich: multi-center DSC 0.74, IQR: 0.60-0.80 vs. single-center 0.72, IQR: 0.59-0.82, p > 0.05; Freiburg: multi-center 0.78, IQR: 0.53-0.87 vs. single-center 0.71, IQR: 0.53-0.83, p = 0.23).

CONCLUSION: CNNs trained to auto-contour the intraprostatic GTV in 18F-PSMA-1007 PET on a diverse multi-center dataset mostly generalize well to unseen data from other centers. Training on a multi-center dataset can improve intraprostatic 18F-PSMA-1007 PET GTV segmentation compared to training exclusively on a single-center dataset. The segmentation performance of the same CNN can vary depending on the datasets employed for training and testing.
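To make the evaluation protocol above concrete, here is a minimal sketch of a leave-one-center-out split with the DSC metric. It is an illustrative outline, not the authors' code: the `dice` helper and the case representation are assumptions, nnU-Net training itself runs through the framework's own pipeline, and the abstract does not state which statistical test produced its p-values, so a paired Wilcoxon test is shown as one plausible choice.

```python
# Illustrative sketch (assumed, not the authors' code) of a
# leave-one-center-out evaluation with Dice scoring.
import numpy as np
from scipy.stats import wilcoxon

CENTERS = ("Freiburg", "Munich", "Cyprus", "Dresden")

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def leave_one_center_out(cases):
    """Yield (held_out, train, test) splits, excluding one center at a time.

    `cases` is a list of (center, image, mask) tuples.
    """
    for held_out in CENTERS:
        train = [c for c in cases if c[0] != held_out]
        test = [c for c in cases if c[0] == held_out]
        yield held_out, train, test

def compare_schemes(dsc_mixed, dsc_excluded):
    """Paired comparison of per-case DSCs from the two training schemes.

    Wilcoxon signed-rank is an assumption; the paper does not name its test.
    """
    return wilcoxon(dsc_mixed, dsc_excluded)
```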


Subject(s)
Neural Networks, Computer; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Positron-Emission Tomography/methods; Niacinamide/analogs & derivatives; Oligopeptides; Radiopharmaceuticals; Fluorine Radioisotopes; Image Processing, Computer-Assisted/methods; Datasets as Topic; Algorithms
3.
Eur Radiol. 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662100

ABSTRACT

OBJECTIVES: In lung cancer, one of the main limitations to the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required to evaluate the different respiratory phases. In this study, we present two open-source models that automatically segment lung tumors on PET and CT, with and without motion compensation.

MATERIALS AND METHODS: This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET, and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres.

RESULTS: In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% relative improvement over the DSC between experts, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, improving 4% and 15% relative to the DSC between experts.

CONCLUSIONS: The performance evaluation demonstrated that the automatic segmentation models can achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D-PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice.

CLINICAL RELEVANCE STATEMENT: We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice. The models perform better than the variability observed in manual segmentations by different experts for images with and without motion compensation, allowing clinical practice to take advantage of the more accurate and robust 4D quantification.

KEY POINTS: Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time-consuming and suffers from inter- and intra-observer variability. Our segmentation models performed better than the manual segmentations by different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.
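As a companion sketch, the summary statistics reported above (mean ± SD DSC and the relative improvement over inter-expert agreement) reduce to a few lines once per-case Dice scores are available. The helper names and example values below are illustrative assumptions, not part of the published pipeline.

```python
# Hedged sketch of the reported summary statistics: per-case DSCs
# reduced to mean +/- SD and a relative improvement over the DSC
# between experts. All names and values here are illustrative.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def summarize(model_dscs, inter_expert_dscs):
    """Return (mean, SD, % improvement relative to inter-expert DSC)."""
    m, s = float(np.mean(model_dscs)), float(np.std(model_dscs))
    e = float(np.mean(inter_expert_dscs))
    return m, s, 100.0 * (m - e) / e

# Example with made-up per-case scores:
# summarize([0.74, 0.80, 0.68], [0.60, 0.66, 0.62])
```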

4.
Radiother Oncol. 2023 Nov;188:109774.
Article in English | MEDLINE | ID: mdl-37394103

ABSTRACT

PURPOSE: With the increased use of focal radiation dose escalation for primary prostate cancer (PCa), accurate delineation of the gross tumor volume (GTV) in prostate-specific membrane antigen PET (PSMA-PET) becomes crucial. Manual approaches are time-consuming and observer-dependent. The purpose of this study was to create a deep learning model for the accurate delineation of the intraprostatic GTV in PSMA-PET.

METHODS: A 3D U-Net was trained on 128 different 18F-PSMA-1007 PET images from three institutions. Testing was done on 52 patients, including one independent internal cohort (Freiburg: n = 19) and three independent external cohorts (Dresden: n = 14, 18F-PSMA-1007; Boston, Massachusetts General Hospital (MGH): n = 9, 18F-DCFPyL-PSMA; and Dana-Farber Cancer Institute (DFCI): n = 10, 68Ga-PSMA-11). Expert contours were generated in consensus using a validated technique. CNN predictions were compared to expert contours using the Dice similarity coefficient (DSC). Co-registered whole-mount histology was used for the internal testing cohort to assess sensitivity and specificity.

RESULTS: Median DSCs were 0.82 (IQR: 0.73-0.88) for Freiburg, 0.71 (IQR: 0.53-0.75) for Dresden, 0.80 (IQR: 0.64-0.83) for MGH, and 0.80 (IQR: 0.67-0.84) for DFCI. Median sensitivities for CNN and expert contours were 0.88 (IQR: 0.68-0.97) and 0.85 (IQR: 0.75-0.88), respectively (p = 0.40). GTV volumes did not differ significantly (p > 0.1 for all comparisons). Median specificities of 0.83 (IQR: 0.57-0.97) and 0.88 (IQR: 0.69-0.98) were observed for CNN and expert contours, respectively (p = 0.014). CNN prediction took 3.81 seconds per patient on average.

CONCLUSION: The CNN was trained and tested on internal and external datasets as well as against a histopathology reference, achieving fast GTV segmentation for three PSMA-PET tracers with diagnostic accuracy comparable to that of manual experts.
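Once a predicted GTV mask and the histology reference live in the same voxel grid, the sensitivity/specificity comparison described above reduces to voxel-wise counts. The sketch below shows only that computation; co-registration and the validated consensus contouring are outside its scope, and the function name is a hypothetical stand-in.

```python
# Illustrative sketch (assumed, not from the paper) of voxel-wise
# sensitivity and specificity of a predicted mask against a
# co-registered reference mask.
import numpy as np

def sensitivity_specificity(pred: np.ndarray, ref: np.ndarray):
    """Voxel-wise sensitivity and specificity of binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()    # true positives
    tn = np.logical_and(~pred, ~ref).sum()  # true negatives
    fp = np.logical_and(pred, ~ref).sum()   # false positives
    fn = np.logical_and(~pred, ref).sum()   # false negatives
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec
```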


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Tumor Burden; Positron Emission Tomography Computed Tomography/methods; Radiotherapy Planning, Computer-Assisted/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology
5.
Comput Med Imaging Graph. 2023 Jul;107:102241.
Article in English | MEDLINE | ID: mdl-37201475

ABSTRACT

In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in the field of medical image segmentation. At present, a vast number of anatomical structures can be delineated at a clinician's level with deep learning models. These models not only offload work but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable basis for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, using public and in-house CT and MR datasets of varying properties, we created a framework for the objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
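At its core, the comparison framework described above amounts to running every model on every dataset and scoring the results with a common metric. The sketch below illustrates that pattern under stated assumptions: `models` and `datasets` are hypothetical loaders, and median per-case DSC stands in for the paper's full metric suite.

```python
# Minimal sketch of an objective model-comparison loop: every model is
# scored on every dataset with the same metric. Loaders are hypothetical.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def benchmark(models: dict, datasets: dict) -> dict:
    """Score each (model, dataset) pair with median per-case DSC.

    `models` maps name -> predict(image) returning a binary mask;
    `datasets` maps name -> list of (image, ground_truth) pairs.
    """
    results = {}
    for m_name, predict in models.items():
        for d_name, cases in datasets.items():
            scores = [dice(predict(img), gt) for img, gt in cases]
            results[(m_name, d_name)] = float(np.median(scores))
    return results
```

Keeping the metric and test cases fixed while swapping models is what makes the comparison objective; any model exposing a `predict(image)` callable can be dropped into the loop.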


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Benchmarking; Neural Networks, Computer; Algorithms; Prostatic Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods