Results 1 - 4 of 4
1.
Sci Rep; 14(1): 258, 2024 Jan 2.
Article in English | MEDLINE | ID: mdl-38167665

ABSTRACT

Radiomics objectively quantifies image information through numerical metrics known as features. In this study, we investigated the stability of magnetic resonance imaging (MRI)-based radiomics features in rectal cancer, using both anatomical MRI and quantitative MRI (qMRI), when different methods were used to define the tumor volume. Second, we evaluated the prognostic value of stable features associated with 5-year progression-free survival (PFS) and overall survival (OS). On a 1.5 T MRI scanner, 81 patients underwent diagnostic MRI, an extended diffusion-weighted sequence with calculation of the apparent diffusion coefficient (ADC), and a multiecho dynamic contrast sequence generating both dynamic contrast-enhanced and dynamic susceptibility contrast (DSC) MRI, allowing quantification of Ktrans, blood flow (BF), and the area under the DSC curve (AUC). Radiomic features were extracted from T2-weighted (T2w) images and from the ADC, Ktrans, BF, and AUC maps. Tumor volumes were defined with three methods: machine learning, deep learning, and manual delineation. The intraclass correlation coefficient (ICC) was used to assess the stability of features. Internal validation was performed on 1000 bootstrap resamples in terms of discrimination, calibration, and decisional benefit. For each combination of image and volume definition, 94 features were extracted. Features from qMRI carried higher prognostic potential than features from anatomical MRI. When stable features (ICC > 0.9) were compared with clinical parameters, qMRI features demonstrated the best prognostic potential. A feature extracted from the DSC MRI parameter BF was associated with both PFS (p = 0.004) and OS (p = 0.004). In summary, stable qMRI-based radiomics features were identified; in particular, a feature based on BF from DSC MRI was associated with both PFS and OS.


Subjects
Radiomics, Rectal Neoplasms, Humans, Magnetic Resonance Imaging/methods, Diffusion Magnetic Resonance Imaging/methods, Prognosis, Rectal Neoplasms/diagnostic imaging, Retrospective Studies
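The stability screening described in the abstract above (keep only features whose ICC across the three volume-definition methods exceeds the cutoff) can be sketched as follows. This is a minimal illustration, assuming the ICC(2,1) formulation (two-way random effects, absolute agreement, single measurement) and a 0.9 cutoff; the paper's exact ICC variant and the function names here are assumptions.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    x has shape (n_patients, k_methods): one radiomic feature, measured once
    per patient under each of k tumor-volume definitions."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-patient means
    col_means = x.mean(axis=0)  # per-method means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between patients
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between methods
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def stable_features(features, cutoff=0.9):
    """features: dict mapping feature name -> (n_patients, k_methods) array.
    Returns only the features whose ICC exceeds the cutoff."""
    return {name: vals for name, vals in features.items()
            if icc_2_1(vals) > cutoff}
```

A feature that varies strongly between patients but barely between volume definitions scores close to 1; a feature dominated by delineation noise scores close to 0 and is dropped.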
2.
Phys Imaging Radiat Oncol; 22: 77-84, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35602548

ABSTRACT

Background and purpose: Tumor delineation is required both for radiotherapy planning and for quantitative imaging biomarker purposes. It is a manual, time- and labor-intensive process prone to inter- and intraobserver variations. Semi- or fully automatic segmentation could provide better efficiency and consistency. This study aimed to investigate the influence of including and combining functional with anatomical magnetic resonance imaging (MRI) sequences on the quality of automatic segmentations. Materials and methods: T2-weighted (T2w), diffusion-weighted, multi-echo T2*-weighted, and contrast-enhanced dynamic multi-echo (DME) MR images of eighty-one patients with rectal cancer were used in the analysis. Four classical machine learning algorithms (adaptive boosting (ADA), linear discriminant analysis, quadratic discriminant analysis, and support vector machines) were trained for automatic segmentation of tumor and normal tissue using different combinations of the MR images as input, followed by semi-automatic morphological post-processing. Manual delineations from two experts served as ground truth. The Sørensen-Dice similarity coefficient (DICE) and the mean symmetric surface distance (MSD) were used as performance metrics in leave-one-out cross-validation. Results: Using T2w images alone, ADA outperformed the other algorithms, yielding a median per-patient DICE of 0.67 and MSD of 3.6 mm. The performance improved when functional images were added and was highest for models based on either T2w and DME images (DICE: 0.72, MSD: 2.7 mm) or all four MRI sequences (DICE: 0.72, MSD: 2.5 mm). Conclusion: Machine learning models using functional MRI, in particular DME, have the potential to improve automatic segmentation of rectal cancer relative to models using T2w MRI alone.
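The two performance metrics above, DICE and MSD, can be computed directly from binary masks. A minimal NumPy/SciPy sketch, assuming isotropic voxel spacing and taking the surface of a mask as the mask minus its morphological erosion:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Sørensen-Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def surface(mask):
    """Boundary voxels: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def msd(a, b, spacing=1.0):
    """Mean symmetric surface distance, in the units of `spacing` (e.g. mm)."""
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance from each surface voxel to the nearest surface voxel of the other mask.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (da.sum() + db.sum()) / (da.size + db.size)
```

Identical masks give DICE 1.0 and MSD 0; anisotropic voxels can be handled by passing the voxel dimensions as a `sampling` tuple.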

3.
Acta Oncol; 61(2): 255-263, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34918621

ABSTRACT

BACKGROUND: Tumor delineation is time- and labor-intensive and prone to inter- and intraobserver variations. Magnetic resonance imaging (MRI) provides good soft-tissue contrast, and functional MRI captures tissue properties that may be valuable for tumor delineation. We explored MRI-based automatic segmentation of rectal cancer using a deep learning (DL) approach. We first investigated potential improvements when including both anatomical T2-weighted (T2w) MRI and diffusion-weighted MR images (DWI). Second, we investigated generalizability by including a second, independent cohort. MATERIAL AND METHODS: Two cohorts of rectal cancer patients (C1 and C2) from different hospitals, with 109 and 83 patients, respectively, underwent 1.5 T MRI at baseline. T2w images were acquired for both cohorts, and DWI (b-value of 500 s/mm2) for patients in C1. Tumors were manually delineated by three radiologists (two in C1, one in C2). A 2D U-Net was trained on T2w and on T2w + DWI. Optimal parameters for image pre-processing and training were identified on C1 using five-fold cross-validation, with the patient Dice similarity coefficient (DSCp) as performance measure. The optimized models were evaluated on a C1 hold-out test set, and generalizability was investigated using C2. RESULTS: For cohort C1, the T2w model resulted in a median DSCp of 0.77 on the test set. Inclusion of DWI did not further improve the performance (DSCp 0.76). The T2w-based model trained on C1 and applied to C2 achieved a DSCp of 0.59. CONCLUSION: T2w MRI-based DL models demonstrated high performance for automatic tumor segmentation, on the same level as published data on interobserver variation. DWI did not improve the results further. Applying DL models to unseen cohorts requires caution, as the same performance cannot be expected.


Subjects
Diffusion Magnetic Resonance Imaging, Rectal Neoplasms, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Observer Variation, Rectal Neoplasms/diagnostic imaging
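Because the 2D U-Net above operates on individual slices, the five-fold cross-validation must be split at the patient level so that slices from one patient never appear in both training and validation. A minimal sketch of such a split (the function name and seeding are illustrative, not from the paper):

```python
import numpy as np

def patient_folds(patient_ids, n_folds=5, seed=0):
    """Yield (train_ids, val_ids) pairs with the split done per patient,
    so no patient's slices leak between training and validation."""
    ids = np.array(sorted(set(patient_ids)))  # unique patients
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    for chunk in np.array_split(ids, n_folds):
        val = set(chunk.tolist())
        train = [i for i in ids if i not in val]
        yield train, sorted(val)
```

Slices are then gathered per fold by looking up each slice's patient ID against the fold's ID list; shuffling slices directly would inflate validation scores, since adjacent slices of the same tumor are highly correlated.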
4.
Phys Med Biol; 66(6): 065012, 2021 Mar 4.
Article in English | MEDLINE | ID: mdl-33666176

ABSTRACT

Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing the risk of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms, and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches, the impact of single- versus multimodality input on segmentation quality was also assessed. 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning, and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single-modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test-set Dice, true positive rate, positive predictive value, and surface-distance-based metrics (p ≤ 0.0001). Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.


Subjects
Head and Neck Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods, Positron Emission Tomography Computed Tomography/methods, Humans, Neural Networks, Computer
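A conventional fixed-threshold PET baseline, as compared against in the abstract above, can be sketched as follows. The 40%-of-SUVmax fraction is a commonly used convention, and keeping only the largest connected component is an illustrative post-processing choice; neither is necessarily the exact method evaluated in the study.

```python
import numpy as np
from scipy import ndimage

def threshold_gtv(pet, frac=0.4):
    """Fixed-threshold PET segmentation: label voxels whose uptake exceeds
    frac * SUVmax as GTV, then keep the largest connected component to
    suppress spurious high-uptake voxels elsewhere in the image."""
    mask = pet > frac * pet.max()
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

Fixed fractions are simple but sensitive to uptake heterogeneity and background level, which is one reason learned models outperformed thresholding here.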