Results 1 - 3 of 3
1.
Acta Oncol ; 61(1): 89-96, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34783610

ABSTRACT

BACKGROUND: Accurate target volume delineation is a prerequisite for high-precision radiotherapy. However, manual delineation is resource-demanding and prone to interobserver variation. An automatic delineation approach could potentially save time and increase delineation consistency. In this study, the applicability of deep learning for fully automatic delineation of the gross tumour volume (GTV) in patients with anal squamous cell carcinoma (ASCC) was evaluated for the first time. An extensive comparison was conducted of the effects that single-modality and multimodality combinations of computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) have on automatic delineation quality.

MATERIAL AND METHODS: 18F-fluorodeoxyglucose PET/CT and contrast-enhanced CT (ceCT) images were collected for 86 patients with ASCC. A subset of 36 patients also underwent a study-specific 3T MRI examination including T2- and diffusion-weighted imaging. The resulting two datasets were analysed separately. A two-dimensional U-Net convolutional neural network (CNN) was trained to delineate the GTV in axial image slices based on single- or multimodality image input. Manual GTV delineations constituted the ground truth for CNN model training and evaluation. Models were evaluated using the Dice similarity coefficient (Dice) and surface distance metrics computed from five-fold cross-validation.

RESULTS: CNN-generated automatic delineations demonstrated good agreement with the ground truth, resulting in mean Dice scores of 0.65-0.76 and 0.74-0.83 for the 86- and 36-patient datasets, respectively. For both datasets, the highest mean Dice scores were obtained using a multimodal combination of PET and ceCT (0.76-0.83). However, models based on single-modality ceCT performed comparably well (0.74-0.81). Models based on T2-weighted MRI only performed acceptably but were somewhat inferior to the PET/ceCT- and ceCT-based models.

CONCLUSION: CNNs provided high-quality automatic GTV delineations for both single- and multimodality image input, indicating that deep learning may prove a versatile tool for target volume delineation in future patients with ASCC.
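The abstract above describes a two-dimensional U-Net that delineates the GTV from single- or multimodality image slices. The sketch below illustrates the general idea, assuming modalities are simply stacked as input channels; the network (TinyUNet2D), its layer widths, and all names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the study's code): a small 2D U-Net-style network in PyTorch.
# Single- vs multimodality input is handled by stacking modalities as channels.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in standard U-Net encoder/decoder stages."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet2D(nn.Module):
    def __init__(self, in_channels: int, base: int = 16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, kernel_size=1)  # per-pixel GTV logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Multimodality input: PET and ceCT slices stacked as two channels.
pet_ct_slice = torch.randn(1, 2, 128, 128)   # (batch, modalities, H, W)
model = TinyUNet2D(in_channels=2)
logits = model(pet_ct_slice)                  # (1, 1, 128, 128) GTV logits
```

Switching between single-modality (e.g. ceCT only) and multimodality (PET + ceCT) input then amounts to changing the number of stacked channels, which is the comparison the abstract reports.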


Subject(s)
Anus Neoplasms; Deep Learning; Head and Neck Neoplasms; Anus Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Tomography, X-Ray Computed; Tumor Burden
2.
Phys Med Biol ; 66(6): 065012, 2021 03 04.
Article in English | MEDLINE | ID: mdl-33666176

ABSTRACT

Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing the risk of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches, the impact of single- versus multimodality input on segmentation quality was also assessed.

A total of 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice).

PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single-modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001). Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
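As a companion to the comparison above, here is a minimal sketch of a conventional fixed-threshold PET segmentation and the Sørensen-Dice similarity coefficient used to rank the models. The 40%-of-SUVmax threshold is a commonly used convention, and both it and the helper names are assumptions for illustration, not the study's exact methods.

```python
# Minimal sketch (not from the study): fixed-threshold PET segmentation and the
# Dice similarity coefficient on binary masks.
import numpy as np

def pet_threshold_segmentation(suv: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Segment voxels with uptake above a fixed fraction of the maximum SUV (assumed 40%)."""
    return suv >= fraction * suv.max()

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: a synthetic PET volume with one "hot" spherical lesion.
suv = np.random.rand(64, 64, 64)
zz, yy, xx = np.ogrid[:64, :64, :64]
lesion = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
suv[lesion] += 10.0
auto = pet_threshold_segmentation(suv, fraction=0.4)
print(f"Dice vs. ground truth: {dice_coefficient(auto, lesion):.3f}")
```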


Subject(s)
Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography/methods; Humans; Neural Networks, Computer
3.
Eur J Nucl Med Mol Imaging ; 48(9): 2782-2792, 2021 08.
Article in English | MEDLINE | ID: mdl-33559711

ABSTRACT

PURPOSE: Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients.

METHODS: U-Net CNN models were trained and evaluated on images and manual GTV delineations from 197 HNC patients. The dataset was split into training, validation and test cohorts (n = 142, n = 15 and n = 40, respectively). The Dice score, surface distance metrics and the new structure-based metrics were used for model evaluation. Additionally, auto-delineations were manually assessed by an oncologist for 15 randomly selected patients in the test cohort.

RESULTS: The mean Dice scores of the auto-delineations were 55%, 69% and 71% for the CT-based, PET-based and PET/CT-based CNN models, respectively. The PET signal was essential for delineating all structures. Models based on PET/CT images identified 86% of the true GTV structures, whereas models built solely on CT images identified only 55% of the true structures. The oncologist reported very high-quality auto-delineations for 14 of the 15 randomly selected patients.

CONCLUSIONS: CNNs provided high-quality auto-delineations for HNC using multimodality PET/CT. The introduced structure-wise evaluation metrics provided valuable information on CNN model strengths and weaknesses for multi-structure auto-delineation.
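The structure-based metrics mentioned above are defined in the paper itself; the sketch below only illustrates the general idea of scoring per malignant structure rather than per voxel, by labelling connected components in the ground truth and counting those touched by the auto-delineation. The function name and the any-overlap criterion are assumptions, not the paper's definitions.

```python
# Minimal sketch (illustrative assumption): a structure-wise detection rate based on
# connected components, as opposed to a voxel-wise score such as Dice.
import numpy as np
from scipy import ndimage

def structure_detection_rate(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of ground-truth structures (connected components) overlapped by the prediction."""
    labels, n_structures = ndimage.label(truth.astype(bool))
    if n_structures == 0:
        return 1.0
    detected = sum(
        np.logical_and(labels == i, pred.astype(bool)).any()
        for i in range(1, n_structures + 1)
    )
    return detected / n_structures

# Toy example: two ground-truth structures, only one overlapped by the auto-delineation.
truth = np.zeros((32, 32), dtype=bool)
truth[5:10, 5:10] = True      # primary tumour
truth[20:24, 20:24] = True    # malignant lymph node
pred = np.zeros_like(truth)
pred[6:9, 6:9] = True         # overlaps only the first structure
print(structure_detection_rate(pred, truth))  # -> 0.5
```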


Subject(s)
Deep Learning; Head and Neck Neoplasms; Head and Neck Neoplasms/diagnostic imaging; Humans; Observer Variation; Positron Emission Tomography Computed Tomography; Tumor Burden