2.
Front Med (Lausanne); 10: 1217037, 2023.
Article in English | MEDLINE | ID: mdl-37711738

ABSTRACT

Background: Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics relies on extraction of image features within a pre-defined image region of interest (ROI), which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows for a simpler workflow where images can be used directly as input to a convolutional neural network (CNN), with or without a pre-defined ROI. Purpose: The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) for patients with head and neck squamous cell carcinoma (HNSCC) using pre-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images. Materials and methods: FDG PET/CT images and clinical data of patients with HNSCC treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data were used for model training and initial evaluation. MAASTRO data were used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images with or without the gross tumor volume (GTV) included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), Matthews correlation coefficient (MCC), and the F1 score calculated for both classes separately. Results: CNNs trained directly on images achieved the highest performance on external data for both endpoints. Adding both clinical and radiomics features to these image-based models increased performance further. Conventional radiomics including clinical data could achieve competitive performance.
However, feature selection on clinical and radiomics data led to overfitting and poor cross-institutional generalizability. CNNs without tumor and node contours achieved performance close to that of CNNs that included contours. Conclusion: High performance and cross-institutional generalizability can be achieved by combining clinical data, radiomics features, and medical images with deep learning models. However, deep learning models trained on images without contours can achieve competitive performance and could see potential use as an initial screening tool for high-risk patients.
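The evaluation metrics named above (accuracy, AUC, MCC, and per-class F1) follow standard definitions; a minimal NumPy sketch with hypothetical labels and model scores (the numbers are illustrative, not from the study):

```python
import numpy as np

# Hypothetical binary endpoint labels (e.g. 2-year OS) and model scores
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.4])
y_pred = (y_prob >= 0.5).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / len(y_true)
# Matthews correlation coefficient (MCC)
mcc = (tp * tn - fp * fn) / np.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
# F1 score calculated for each class separately, as in the abstract
f1_pos = 2 * tp / (2 * tp + fp + fn)
f1_neg = 2 * tn / (2 * tn + fn + fp)
# AUC as the probability that a random positive outranks a random negative
pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])

print(accuracy, mcc, f1_pos, f1_neg, auc)
```

Reporting F1 for both classes separately, as the authors do, guards against a model that scores well by always predicting the majority class.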

3.
Front Vet Sci; 10: 1143986, 2023.
Article in English | MEDLINE | ID: mdl-37026102

ABSTRACT

Background: Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose: The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods: Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs. Results: CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. 
Conclusion: Deep learning-based automatic segmentation of the GTV using CNN models trained on canine data only, or via a cross-species transfer learning approach, shows promise for future application in RT of canine HNC patients.
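The overlap metrics used in this study (Dice, positive predictive value, and true positive rate) can be computed directly from binary masks; a minimal sketch with random masks standing in for real GTV segmentations:

```python
import numpy as np

# Hypothetical 3D binary masks: ground-truth and automatic GTV segmentation
rng = np.random.default_rng(0)
truth = rng.random((8, 8, 8)) > 0.5
auto = rng.random((8, 8, 8)) > 0.5

overlap = np.sum(truth & auto)
# Dice similarity coefficient: 2|A∩B| / (|A| + |B|)
dice = 2 * overlap / (np.sum(truth) + np.sum(auto))
# Positive predictive value: fraction of auto-segmented voxels that are tumor
ppv = overlap / np.sum(auto)
# True positive rate: fraction of true tumor voxels recovered by the model
tpr = overlap / np.sum(truth)
```

Dice is the harmonic mean of PPV and TPR, which is why it is reported as the single headline number in segmentation studies like this one.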

4.
Acta Oncol; 61(1): 89-96, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34783610

ABSTRACT

BACKGROUND: Accurate target volume delineation is a prerequisite for high-precision radiotherapy. However, manual delineation is resource-demanding and prone to interobserver variation. An automatic delineation approach could potentially save time and increase delineation consistency. In this study, the applicability of deep learning for fully automatic delineation of the gross tumour volume (GTV) in patients with anal squamous cell carcinoma (ASCC) was evaluated for the first time. An extensive comparison was conducted of the effects that single-modality and multimodality combinations of computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) have on automatic delineation quality. MATERIAL AND METHODS: 18F-fluorodeoxyglucose PET/CT and contrast-enhanced CT (ceCT) images were collected for 86 patients with ASCC. A subset of 36 patients also underwent a study-specific 3T MRI examination including T2-weighted (T2W) and diffusion-weighted imaging. The resulting two datasets were analysed separately. A two-dimensional U-Net convolutional neural network (CNN) was trained to delineate the GTV in axial image slices based on single- or multimodality image input. Manual GTV delineations constituted the ground truth for CNN model training and evaluation. Models were evaluated using the Dice similarity coefficient (Dice) and surface distance metrics computed from five-fold cross-validation. RESULTS: CNN-generated automatic delineations demonstrated good agreement with the ground truth, resulting in mean Dice scores of 0.65-0.76 and 0.74-0.83 for the 86- and 36-patient datasets, respectively. For both datasets, the highest mean Dice scores were obtained using a multimodal combination of PET and ceCT (0.76-0.83). However, models based on single-modality ceCT performed comparably well (0.74-0.81). T2W-only models performed acceptably but were somewhat inferior to the PET/ceCT- and ceCT-based models.
CONCLUSION: CNNs provided high-quality automatic GTV delineations for both single and multimodality image input, indicating that deep learning may prove a versatile tool for target volume delineation in future patients with ASCC.
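Single- versus multimodality input to a 2D U-Net is typically realised by stacking the co-registered modalities as image channels; a minimal sketch of that channel stacking (array shapes and channel order are illustrative assumptions, not taken from the study):

```python
import numpy as np

# Hypothetical co-registered axial slices, each 256 x 256 (values illustrative)
pet = np.zeros((256, 256), dtype=np.float32)   # FDG-PET slice
cect = np.zeros((256, 256), dtype=np.float32)  # contrast-enhanced CT slice
t2w = np.zeros((256, 256), dtype=np.float32)   # T2-weighted MRI slice

# Single-modality input: one channel; multimodality: one channel per modality
x_single = cect[..., np.newaxis]               # shape (256, 256, 1)
x_multi = np.stack([pet, cect, t2w], axis=-1)  # shape (256, 256, 3)
```

Because only the number of input channels changes, the same U-Net architecture can be compared across all modality combinations, as done in the study.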


Subjects
Anus Neoplasms; Deep Learning; Head and Neck Neoplasms; Anus Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Tomography, X-Ray Computed; Tumor Burden
5.
Acta Oncol; 61(2): 255-263, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34918621

ABSTRACT

BACKGROUND: Tumor delineation is time- and labor-intensive and prone to inter- and intraobserver variations. Magnetic resonance imaging (MRI) provides good soft tissue contrast, and functional MRI captures tissue properties that may be valuable for tumor delineation. We explored MRI-based automatic segmentation of rectal cancer using a deep learning (DL) approach. We first investigated potential improvements from including both anatomical T2-weighted (T2w) MRI and diffusion-weighted MR images (DWI). Secondly, we investigated generalizability by including a second, independent cohort. MATERIAL AND METHODS: Two cohorts of rectal cancer patients (C1 and C2) from different hospitals, with 109 and 83 patients, respectively, underwent 1.5 T MRI at baseline. T2w images were acquired for both cohorts and DWI (b-value of 500 s/mm2) for patients in C1. Tumors were manually delineated by three radiologists (two in C1, one in C2). A 2D U-Net was trained on T2w images alone and on T2w + DWI. Optimal parameters for image pre-processing and training were identified on C1 using five-fold cross-validation with the patient Dice similarity coefficient (DSCp) as performance measure. The optimized models were evaluated on a C1 hold-out test set, and generalizability was investigated using C2. RESULTS: For cohort C1, the T2w model resulted in a median DSCp of 0.77 on the test set. Inclusion of DWI did not further improve the performance (DSCp 0.76). The T2w-based model trained on C1 and applied to C2 achieved a DSCp of 0.59. CONCLUSION: T2w MR-based DL models demonstrated high performance for automatic tumor segmentation, at the same level as published data on interobserver variation. DWI did not improve results further. Using DL models on unseen cohorts requires caution, and one cannot expect the same performance.
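Patient-wise cross-validation, as used here for parameter selection, keeps all slices of one patient in the same fold to avoid leakage between training and validation data. A sketch under that assumption (patient IDs and the split routine are illustrative, not the study's actual pipeline):

```python
import numpy as np

def patient_kfold(patient_ids, k=5, seed=0):
    """Split patients (not individual slices) into k folds so that all
    slices from one patient stay in the same fold."""
    ids = np.array(patient_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    return np.array_split(ids, k)

# Hypothetical patient IDs; 109 matches the size of cohort C1 in the abstract
patients = [f"C1_{i:03d}" for i in range(109)]
folds = patient_kfold(patients, k=5)
for val_fold in folds:
    val = set(val_fold)
    train = [p for p in patients if p not in val]
    # ...train the U-Net on `train`, compute the median per-patient
    # Dice (DSCp) on `val`, and keep the best parameter setting
```

Splitting at the slice level instead would place adjacent slices of one tumor in both sets and inflate the apparent DSCp.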


Subjects
Diffusion Magnetic Resonance Imaging; Rectal Neoplasms; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Observer Variation; Rectal Neoplasms/diagnostic imaging
6.
Phys Med Biol; 66(6): 065012, 2021 03 04.
Article in English | MEDLINE | ID: mdl-33666176

ABSTRACT

Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver a sufficient dose to the target while reducing the risk of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms, and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches, the impact of single versus multimodality input on segmentation quality was also assessed. A total of 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning, and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET), and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single-modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value, and surface distance-based metrics (p ≤ 0.0001).
Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
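The abstract does not state which PET thresholding methods were compared; one common classical variant labels voxels above a fixed fraction of SUVmax as tumor. A minimal sketch under that assumption:

```python
import numpy as np

def pet_threshold_segmentation(suv, fraction=0.4):
    """Classical fixed-threshold PET segmentation: voxels at or above a given
    fraction of SUVmax are labelled tumor. A fraction of 0.4 is a common
    choice in the literature, not necessarily one used in this study."""
    return suv >= fraction * suv.max()

# Toy PET volume: uniform background with a bright "lesion" in one corner
suv = np.full((4, 4, 4), 1.0)
suv[:2, :2, :2] = 10.0
mask = pet_threshold_segmentation(suv, fraction=0.4)
print(mask.sum())  # 8 lesion voxels exceed the 0.4 * 10.0 = 4.0 threshold
```

Such thresholding uses only PET intensity, which helps explain why the CNN, able to combine PET uptake with CT anatomy, achieved the better Dice scores reported above.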


Subjects
Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography/methods; Humans; Neural Networks, Computer
7.
Eur J Nucl Med Mol Imaging; 48(9): 2782-2792, 2021 08.
Article in English | MEDLINE | ID: mdl-33559711

ABSTRACT

PURPOSE: Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients. METHODS: U-Net CNN models were trained and evaluated on images and manual GTV delineations from 197 HNC patients. The dataset was split into training, validation and test cohorts (n = 142, n = 15 and n = 40, respectively). The Dice score, surface distance metrics and the new structure-based metrics were used for model evaluation. Additionally, auto-delineations were manually assessed by an oncologist for 15 randomly selected patients in the test cohort. RESULTS: The mean Dice scores of the auto-delineations were 55%, 69% and 71% for the CT-based, PET-based and PET/CT-based CNN models, respectively. The PET signal was essential for delineating all structures. Models based on PET/CT images identified 86% of the true GTV structures, whereas models built solely on CT images identified only 55% of the true structures. The oncologist reported very high-quality auto-delineations for 14 out of the 15 randomly selected patients. CONCLUSIONS: CNNs provided high-quality auto-delineations for HNC using multimodality PET/CT. The introduced structure-wise evaluation metrics provided valuable information on CNN model strengths and weaknesses for multi-structure auto-delineation.
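The exact structure-based metrics are not defined in the abstract; one plausible reading is that a ground-truth structure (connected component) counts as identified when the auto-delineation overlaps it. A sketch of that criterion using connected-component labelling (the overlap rule is an assumption, not the paper's definition):

```python
import numpy as np
from scipy import ndimage

def structures_identified(truth, auto):
    """Count ground-truth structures (connected components) that overlap the
    automatic delineation. The any-overlap criterion is an illustrative
    assumption, not the exact metric from the paper."""
    labels, n = ndimage.label(truth)
    hit = sum(bool(np.any(auto[labels == i])) for i in range(1, n + 1))
    return hit, n

# Toy example: two true structures, the auto mask only covers one of them
truth = np.zeros((10, 10), dtype=bool)
truth[1:3, 1:3] = True   # structure 1 (e.g. primary tumour)
truth[7:9, 7:9] = True   # structure 2 (e.g. malignant node)
auto = np.zeros((10, 10), dtype=bool)
auto[1:4, 1:4] = True    # overlaps structure 1 only
hit, total = structures_identified(truth, auto)
print(hit, total)  # 1 2
```

Unlike a patient-level Dice score, such a structure-wise count reveals when a model misses an entire malignant node while still segmenting the primary tumour well.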


Subjects
Deep Learning; Head and Neck Neoplasms; Head and Neck Neoplasms/diagnostic imaging; Humans; Observer Variation; Positron Emission Tomography Computed Tomography; Tumor Burden