Results 1 - 6 of 6
1.
Radiother Oncol ; : 110368, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38834153

ABSTRACT

BACKGROUND AND PURPOSE: To optimize our previously proposed TransRP, a model integrating a convolutional neural network (CNN) and a Vision Transformer (ViT) designed for recurrence-free survival prediction in oropharyngeal cancer, and to extend its application to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS) and overall survival (OS). MATERIALS AND METHODS: Data were collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and the clinical outcome endpoints LRC, DMFS and OS. The prediction performance of TransRP was compared with that of CNNs when only image data were used as input. Additionally, three distinct methods (m1-m3) of incorporating clinical predictors into TransRP training and one method (m4) that uses the TransRP prediction as a parameter in a clinical Cox model were compared. RESULTS: TransRP achieved higher test C-index values than the CNNs: 0.61, 0.84 and 0.70 for LRC, DMFS and OS, respectively. Furthermore, incorporating the TransRP prediction into a clinical Cox model (m4) yielded a higher C-index of 0.77 for OS. Compared with a clinical routine risk stratification model for OS, our model, using clinical variables, radiomics and the TransRP prediction as predictors, achieved larger separations of the survival curves between low-, intermediate- and high-risk groups. CONCLUSION: TransRP outperformed the CNN models for all endpoints. Combining clinical data and the TransRP prediction in a Cox model achieved better OS prediction.
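The C-index values reported above measure how well predicted risks rank patients by outcome. As an illustrative sketch only (not the authors' code), Harrell's concordance index for right-censored survival data can be computed like this:

```python
from itertools import combinations

def harrell_c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter observed
    time experienced the event; the pair counts as concordant when that
    subject was also assigned the higher predicted risk.
    """
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # tied event times are skipped in this simple sketch
        short, long_ = (i, j) if times[i] < times[j] else (j, i)
        if not events[short]:
            continue  # censored before the other subject's time: not comparable
        comparable += 1
        if risks[short] > risks[long_]:
            concordant += 1.0
        elif risks[short] == risks[long_]:
            concordant += 0.5  # tied risk predictions count as half-concordant
    return concordant / comparable

# Perfectly ranked toy example: shorter survival time <-> higher predicted risk
print(harrell_c_index([1, 2, 3, 4], [1, 1, 1, 1], [0.9, 0.7, 0.4, 0.1]))
```

Production survival analyses typically use a vetted implementation (e.g. from a survival-analysis library) that also handles tied times.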

2.
Comput Biol Med ; 177: 108675, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38820779

ABSTRACT

BACKGROUND: The varied appearance of head and neck tumors across imaging modalities, scanners, and acquisition parameters makes manual tumor segmentation a highly subjective task. Variability in the manual contours is one cause of the limited generalizability and suboptimal performance of deep learning (DL) based tumor auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of a single fixed contour. The aim of this study was to show that DL-generated probability maps for tumor segmentation are clinically relevant and intuitive, and a more suitable way to assist radiation oncologists in gross tumor volume segmentation on PET-CT images of head and neck cancer patients. METHOD: A graphical user interface (GUI) was designed, and a prototype was developed to allow the user to interact with tumor probability maps. Furthermore, a user study was conducted in which nine experts in tumor delineation interacted with the interface prototype and its functionality. The participants' experience was assessed qualitatively and quantitatively. RESULTS: The interviews with radiation oncologists revealed their preference for a rainbow colormap to visualize tumor probability maps during contouring, which they found intuitive. They also appreciated the slider feature, which facilitated interaction by allowing the selection of a threshold value to create a single contour for editing and use as a starting point. Feedback on the prototype highlighted its excellent usability and good fit within clinical workflows. CONCLUSIONS: This study shows that DL-generated tumor probability maps are explainable, transparent, intuitive, and a better alternative to the single output of tumor segmentation models.
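The slider interaction described above amounts to binarizing the probability map at a user-chosen threshold. A minimal sketch, using hypothetical probability values rather than the study's GUI code:

```python
def contour_at_threshold(prob_map, threshold):
    """Binarize a tumor probability map at the slider-selected threshold."""
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]

# Hypothetical 3x3 patch of voxel-wise tumor probabilities
prob_map = [
    [0.1, 0.4, 0.2],
    [0.5, 0.9, 0.6],
    [0.2, 0.7, 0.3],
]

# Lowering the threshold grows the proposed contour the clinician can edit
core = contour_at_threshold(prob_map, 0.8)  # only the most certain voxel
wide = contour_at_threshold(prob_map, 0.5)  # includes moderately certain voxels
```

This is why a probability map is richer than a single fixed contour: every threshold yields a different candidate contour, and the choice stays with the clinician.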

3.
Fetal Diagn Ther ; 51(2): 145-153, 2024.
Article in English | MEDLINE | ID: mdl-37995667

ABSTRACT

INTRODUCTION: The aim of this study was to use computerized analysis of the grayscale spectrum (histogram) to provide an objective assessment of the echogenicity of the fetal bowel. Moreover, we investigated the role of histogram analysis in the prenatal prediction of postnatal outcomes in fetuses with fetal echogenic bowel (FEB). METHODS: This single-center retrospective study included all fetuses with a diagnosis of FEB in the mid-second trimester between 2015 and 2021. Ultrasound images were analyzed using ImageJ software. The mean of the grayscale histogram of the bowel, liver, and iliac/femur bone was obtained for each patient, and the ratios between these structures were used to overcome gain variations. We compared these values with those of a matched control group of singleton uncomplicated pregnancies and with a group of patients referred for FEB in whom the finding was not confirmed by the expert operator (FEB false positives). RESULTS: There was a statistically significant difference in the bowel/liver and bowel/bone histogram ratios between the FEB group and the control groups (p < 0.05). Mean ratio cutoffs were provided for the diagnosis of FEB. Among the patients with confirmed FEB, neither ratio was able to discriminate the cases with adverse outcomes. In contrast, the presence of dilated bowel or other markers was associated with an adverse outcome. CONCLUSIONS: Histogram analysis may refine the diagnosis of FEB and reduce the number of false-positive diagnoses. For the prediction of fetal outcome, the presence of additional features is clinically more significant than the degree of bowel echogenicity.
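The gain-independent ratios described above reduce to dividing mean ROI intensities. A minimal sketch with hypothetical pixel values (the study used ImageJ histograms, not this code):

```python
def mean_intensity(roi_pixels):
    """Mean of the grayscale histogram, i.e. the mean pixel intensity of an ROI."""
    return sum(roi_pixels) / len(roi_pixels)

def echogenicity_ratios(bowel, liver, bone):
    """Ratios between structures, which cancel out overall gain settings."""
    return {
        "bowel/liver": mean_intensity(bowel) / mean_intensity(liver),
        "bowel/bone": mean_intensity(bowel) / mean_intensity(bone),
    }

# Hypothetical ROI intensities on a 0-255 grayscale
r = echogenicity_ratios(
    bowel=[180, 200, 220],   # mean 200
    liver=[90, 100, 110],    # mean 100
    bone=[245, 250, 255],    # mean 250
)
```

Because a global gain change scales every ROI's mean by roughly the same factor, the ratios stay comparable across machines and operators, which is the point of the normalization.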


Subjects
Echogenic Bowel, Pregnancy, Female, Humans, Retrospective Studies, Prenatal Ultrasonography/methods, Fetus/diagnostic imaging, Ultrasonography
4.
Comput Methods Programs Biomed ; 244: 107939, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38008678

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning (DL) algorithms have recently shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. The segmentation of the gross tumor volume of the primary tumor (GTVp) is often used as an additional input channel to DL algorithms to improve model performance. However, a binary GTVp segmentation mask directs the focus of the network only, and uniformly, to the defined tumor region. DL models trained for tumor segmentation have also been used to generate predicted tumor probability maps (TPM), in which each voxel value corresponds to the degree of certainty that the voxel belongs to the tumor. The aim of this study was to explore the effect of using a TPM as an extra input channel of CT- and PET-based DL prediction models for oropharyngeal cancer (OPC) patients in terms of local control (LC), regional control (RC), DMFS and OS. METHODS: We included 399 OPC patients from our institute who were treated with definitive (chemo)radiation. For each patient, the CT and PET scans and the GTVp contours used for radiotherapy treatment planning were collected. We first trained a previously developed 2.5D DL framework for tumor probability prediction by 5-fold cross validation using 131 patients. Then, a 3D ResNet18 was trained for outcome prediction using the 3D TPM as one of the possible inputs. The endpoints were LC, RC, DMFS, and OS. We performed 3-fold cross validation on 168 patients for each endpoint using different combinations of image modalities as input. The final prediction in the test set (100 patients) was obtained by averaging the predictions of the 3-fold models. The C-index was used to evaluate the discriminative performance of the models. RESULTS: The models trained with the TPM in place of the GTVp contours achieved the highest C-indexes for LC (0.74) and RC (0.60) prediction. For OS, using the TPM or the GTVp as an additional image modality resulted in comparable C-indexes (0.72 and 0.74). CONCLUSIONS: Adding predicted TPMs instead of GTVp contours as an additional input channel to DL-based outcome prediction models improved performance for LC and RC.
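The final test-set prediction described above is a simple ensemble: each of the k fold models scores every test patient, and the scores are averaged. A minimal sketch with hypothetical numbers:

```python
def ensemble_average(fold_predictions):
    """Average per-patient risk predictions across the k fold models."""
    n_folds = len(fold_predictions)
    n_patients = len(fold_predictions[0])
    return [
        sum(fold[p] for fold in fold_predictions) / n_folds
        for p in range(n_patients)
    ]

# Hypothetical test-set outputs from three fold models, three patients each
folds = [
    [0.2, 0.8, 0.5],
    [0.4, 0.6, 0.5],
    [0.6, 0.4, 0.5],
]
final = ensemble_average(folds)  # one averaged risk score per patient
```

Averaging across folds reduces the variance introduced by any single training split, which is why it is a common way to produce the final test-set prediction after cross validation.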


Subjects
Deep Learning, Head and Neck Neoplasms, Oropharyngeal Neoplasms, Humans, Positron Emission Tomography Computed Tomography/methods, Oropharyngeal Neoplasms/diagnostic imaging, Prognosis
5.
Phys Med Biol ; 68(5)2023 02 23.
Article in English | MEDLINE | ID: mdl-36749988

ABSTRACT

Objective. Tumor segmentation is a fundamental step in radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer (OPC) patients, each image volume is explored slice by slice from different orientations on different image modalities. However, a manually fixed segmentation boundary neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel deep learning-based method that generates probability maps capturing the model uncertainty in the segmentation task. Approach. We included 138 OPC patients treated with (chemo)radiation in our institute. Sequences of 3 consecutive 2D slices of concatenated FDG-PET/CT images and GTVp contours were used as input. Our framework exploits inter- and intra-slice context using attention mechanisms and a bidirectional long short-term memory (Bi-LSTM). Each slice resulted in three predictions that were averaged. A 3-fold cross validation was performed on sequences extracted from the axial, sagittal, and coronal planes. 3D volumes were reconstructed, and single- and multi-view ensembling were performed to obtain the final results. The output is a tumor probability map determined by averaging multiple predictions. Main Results. Model performance was assessed on 25 patients at different probability thresholds. Predictions were closest to the GTVp at a threshold of 0.9 (mean surface DSC of 0.81, median HD95 of 3.906 mm). Significance. The promising results of the proposed method show that it is possible to offer probability maps to radiation oncologists to guide them in a slice-by-slice adaptive GTVp segmentation.
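The per-slice averaging above works because in a 2.5D setup each slice is covered by up to three overlapping 3-slice windows, each contributing one prediction for it. A minimal sketch of that accumulation, with hypothetical per-window outputs (not the authors' framework code):

```python
def average_windowed_predictions(n_slices, window_preds):
    """Average overlapping-window predictions back onto individual slices.

    window_preds[w][k] is the prediction for slice (w + k) produced by the
    3-slice window starting at slice w. Interior slices are covered by up
    to three windows, so they receive up to three predictions to average.
    """
    sums = [0.0] * n_slices
    counts = [0] * n_slices
    for w, preds in enumerate(window_preds):
        for k, p in enumerate(preds):
            sums[w + k] += p
            counts[w + k] += 1
    return [s / c for s, c in zip(sums, counts)]

# 5 slices -> 3 sliding windows of 3 slices; hypothetical per-window outputs
windows = [
    [0.1, 0.2, 0.3],
    [0.2, 0.3, 0.4],
    [0.3, 0.4, 0.5],
]
avg = average_windowed_predictions(5, windows)
```

The same averaging idea extends to multi-view ensembling: reconstruct one volume per orientation (axial, sagittal, coronal) and average them voxel-wise into the final probability map.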


Subjects
Deep Learning, Head and Neck Neoplasms, Oropharyngeal Neoplasms, Humans, Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, X-Ray Computed Tomography/methods, Probability, Computer-Assisted Image Processing/methods
6.
Semin Radiat Oncol ; 32(4): 415-420, 2022 10.
Article in English | MEDLINE | ID: mdl-36202443

ABSTRACT

The application of Artificial Intelligence (AI) tools has recently gained interest in the fields of medical imaging and radiotherapy. Even though many papers have been published in these domains in the last few years, clinical assessment of the proposed AI methods is limited by the lack of standardized protocols for validating the performance of the developed tools. Moreover, each stakeholder uses their own methods, tools, and evaluation criteria. Communication between different stakeholders is limited or absent, which makes it hard to exchange models between clinics. These issues are not limited to radiotherapy but exist in every AI application domain. To address them, methods like the Machine Learning Canvas, Datasheets for Datasets, and Model Cards have been developed. They aim to document the whole creation pipeline of AI solutions and the datasets used to develop them, along with their biases, as well as to facilitate collaboration and communication between stakeholders and to ease the clinical introduction of AI. This work introduces the concepts of these three open-source solutions, including the author's experiences applying them to AI applications for radiotherapy.
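To make the documentation idea concrete, a model card is essentially structured metadata shipped with a model. The sketch below is illustrative only: the field names follow the spirit of the Model Cards proposal, and every specific entry is hypothetical.

```python
# A minimal, illustrative model card for a radiotherapy AI tool.
# All names and values here are hypothetical examples, not a real model.
model_card = {
    "model_details": {
        "name": "gtv-autoseg-demo",
        "version": "0.1",
        "task": "GTV auto-segmentation on PET/CT",
    },
    "intended_use": "Research decision support; not for unsupervised clinical use",
    "training_data": {
        "source": "single-institution retrospective cohort",
        "known_biases": ["single scanner vendor", "one treatment protocol"],
    },
    "evaluation": {
        "metric": "Dice similarity coefficient",
        "cohort": "held-out test set",
    },
    "caveats": ["performance may degrade on out-of-distribution scanners"],
}
```

Keeping such a record alongside the model weights gives receiving clinics the provenance, bias, and intended-use information they need before deployment.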


Subjects
Artificial Intelligence, Radiation Oncology, Humans, Machine Learning, Reference Standards