Results 1-10 of 10
1.
Article in English | MEDLINE | ID: mdl-39147208

ABSTRACT

BACKGROUND AND PURPOSE: Conventional normal tissue complication probability (NTCP) models for head and neck cancer (HNC) patients are typically based on single-value variables, which for radiation-induced xerostomia are baseline xerostomia and mean salivary gland doses. This study aims to improve the prediction of late xerostomia by utilizing 3D information from radiation dose distributions, CT imaging, organ-at-risk segmentations, and clinical variables with deep learning (DL).

MATERIALS AND METHODS: An international cohort of 1208 HNC patients from two institutes was used to train and validate (internally and externally) DL models (DCNN, EfficientNet-v2, and ResNet) with the 3D dose distribution, CT scan, organ-at-risk segmentations, baseline xerostomia score, sex, and age as input. The NTCP endpoint was moderate-to-severe xerostomia 12 months post-radiotherapy. The DL models' prediction performance was compared to a reference model: a recently published xerostomia NTCP model that used baseline xerostomia score and mean salivary gland doses as input. Attention maps were created to visualize the focus regions of the DL predictions. Transfer learning was conducted to improve the DL model performance on the external validation set.

RESULTS: All DL-based NTCP models showed better performance (AUC_test = 0.78-0.79) than the reference NTCP model (AUC_test = 0.74) in the independent test set. Attention maps showed that the DL model focused on the major salivary glands, particularly the stem cell-rich region of the parotid glands. DL models obtained lower external validation performance (AUC_external = 0.63) than the reference model (AUC_external = 0.66). After transfer learning on a small external subset, the DL model (AUC_tl,external = 0.66) performed better than the reference model (AUC_tl,external = 0.64).

CONCLUSION: DL-based NTCP models performed better than the reference model when validated on data from the same institute. Improved performance on the external dataset was achieved with transfer learning, demonstrating the need for multicenter training data to realize generalizable DL-based NTCP models.
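The abstract describes fusing 3D dose, CT, and organ-at-risk channels with scalar clinical covariates in a DL model that outputs an NTCP value. The sketch below is only a hypothetical illustration of that input/output arrangement; the layer sizes, channel layout, and clinical feature count are assumptions, not the authors' architecture (the published models are DCNN, EfficientNet-v2, and ResNet).

```python
# Minimal sketch (not the authors' code): a small 3D CNN that takes dose, CT and
# OAR-mask channels plus clinical covariates and outputs an NTCP-style probability.
import torch
import torch.nn as nn

class ToyNTCPNet(nn.Module):
    def __init__(self, n_image_channels=3, n_clinical=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(n_image_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> one feature vector per scan
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical, 32),
            nn.ReLU(),
            nn.Linear(32, 1),          # logit of the complication probability
        )

    def forward(self, image, clinical):
        z = self.encoder(image).flatten(1)      # (batch, 32)
        z = torch.cat([z, clinical], dim=1)     # append clinical covariates
        return torch.sigmoid(self.head(z))      # NTCP in [0, 1]

# Example forward pass: dose, CT and salivary-gland mask stacked as channels,
# plus baseline xerostomia score, sex and age as hypothetical clinical inputs.
image = torch.randn(2, 3, 32, 64, 64)
clinical = torch.randn(2, 3)
print(ToyNTCPNet()(image, clinical).shape)      # torch.Size([2, 1])
```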

5.
Radiother Oncol ; 197: 110368, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38834153

ABSTRACT

BACKGROUND AND PURPOSE: To optimize our previously proposed TransRP, a model integrating a CNN (convolutional neural network) and a ViT (Vision Transformer) designed for recurrence-free survival prediction in oropharyngeal cancer, and to extend its application to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS), and overall survival (OS).

MATERIALS AND METHODS: Data were collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and the clinical outcome endpoints LRC, DMFS, and OS. The prediction performance of TransRP was compared with that of CNNs when inputting image data only. Additionally, three distinct methods (m1-m3) of incorporating clinical predictors into TransRP training and one method (m4) that uses the TransRP prediction as one parameter in a clinical Cox model were compared.

RESULTS: TransRP achieved higher test C-index values of 0.61, 0.84, and 0.70 than the CNNs for LRC, DMFS, and OS, respectively. Furthermore, when incorporating TransRP's prediction into a clinical Cox model (m4), a higher C-index of 0.77 for OS was obtained. Compared with a clinical routine risk stratification model for OS, our model, using clinical variables, radiomics, and the TransRP prediction as predictors, achieved larger separations of the survival curves between low-, intermediate-, and high-risk groups.

CONCLUSION: TransRP outperformed the CNN models for all endpoints. Combining clinical data and the TransRP prediction in a Cox model achieved better OS prediction.
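Method m4, using the TransRP prediction as one covariate in a clinical Cox model, can be illustrated with a small sketch based on the lifelines package. The column names and synthetic data below are assumptions for illustration, not the study's variables or implementation.

```python
# Illustrative sketch of "DL score as one covariate in a Cox model" (the m4 idea),
# not the authors' implementation. Column names and data are made up.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "transrp_score": rng.uniform(0, 1, n),    # hypothetical DL model output per patient
    "age": rng.integers(45, 80, n),
    "hpv_positive": rng.integers(0, 2, n),
    "time_months": rng.exponential(36, n),    # follow-up time
    "event": rng.integers(0, 2, n),           # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.concordance_index_)   # C-index of the combined clinical + DL-score model
cph.print_summary()
```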


Subject(s)
Oropharyngeal Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Oropharyngeal Neoplasms/mortality , Oropharyngeal Neoplasms/diagnostic imaging , Oropharyngeal Neoplasms/pathology , Oropharyngeal Neoplasms/radiotherapy , Oropharyngeal Neoplasms/therapy , Positron Emission Tomography Computed Tomography/methods , Male , Female , Middle Aged , Aged , Neural Networks, Computer , Adult
6.
Comput Biol Med ; 177: 108675, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38820779

ABSTRACT

BACKGROUND: The different tumor appearance of head and neck cancer across imaging modalities, scanners, and acquisition parameters accounts for the highly subjective nature of the manual tumor segmentation task. The variability of the manual contours is one of the causes of the lack of generalizability and the suboptimal performance of deep learning (DL) based tumor auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of one fixed contour. The aim of this study was to show that DL-generated probability maps for tumor segmentation are clinically relevant, intuitive, and a more suitable solution to assist radiation oncologists in gross tumor volume segmentation on PET-CT images of head and neck cancer patients.

METHOD: A graphical user interface (GUI) was designed, and a prototype was developed to allow the user to interact with tumor probability maps. Furthermore, a user study was conducted where nine experts in tumor delineation interacted with the interface prototype and its functionality. The participants' experience was assessed qualitatively and quantitatively.

RESULTS: The interviews with radiation oncologists revealed their preference for using a rainbow colormap to visualize tumor probability maps during contouring, which they found intuitive. They also appreciated the slider feature, which facilitated interaction by allowing the selection of threshold values to create single contours for editing and use as a starting point. Feedback on the prototype highlighted its excellent usability and positive integration into clinical workflows.

CONCLUSIONS: This study shows that DL-generated tumor probability maps are explainable, transparent, intuitive and a better alternative to the single output of tumor segmentation models.
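The slider interaction described above amounts to thresholding the voxel-wise probability map into a single editable contour at a user-chosen value. The sketch below is a minimal, hypothetical illustration of that step on synthetic data; it is not the prototype's code, and the largest-component step is an added assumption.

```python
# Sketch of the "threshold slider" idea: turn a voxel-wise tumour probability map
# into a single binary mask at a user-chosen threshold. Data here are synthetic.
import numpy as np
from scipy import ndimage

def probability_map_to_mask(prob_map, threshold=0.5, keep_largest=True):
    """Binarise a probability map; optionally keep only the largest connected blob."""
    mask = prob_map >= threshold
    if keep_largest and mask.any():
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    return mask

prob_map = np.random.rand(64, 128, 128)      # stand-in for a PET-CT tumour probability map
for t in (0.3, 0.5, 0.7):                    # values a slider might expose
    mask = probability_map_to_mask(prob_map, threshold=t)
    print(t, int(mask.sum()), "voxels selected")
```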


Subject(s)
Deep Learning , Head and Neck Neoplasms , Humans , Head and Neck Neoplasms/diagnostic imaging , User-Computer Interface , Positron Emission Tomography Computed Tomography/methods
7.
Eur Radiol Exp ; 8(1): 63, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38764066

ABSTRACT

BACKGROUND: Emphysema influences the appearance of lung tissue in computed tomography (CT). We evaluated whether this affects lung nodule detection by artificial intelligence (AI) and human readers (HR).

METHODS: Individuals were selected from the "Lifelines" cohort who had undergone low-dose chest CT. Nodules in individuals without emphysema were matched to similar-sized nodules in individuals with at least moderate emphysema. AI results for nodular findings of 30-100 mm³ and 101-300 mm³ were compared to those of HR; two expert radiologists blindly reviewed discrepancies. Sensitivity and false positives (FPs)/scan were compared for the emphysema and non-emphysema groups.

RESULTS: Thirty-nine participants with and 82 without emphysema were included (n = 121, aged 61 ± 8 years (mean ± standard deviation), 58/121 males (47.9%)). AI and HR detected 196 and 206 nodular findings, respectively, yielding 109 concordant nodules and 184 discrepancies, including 118 true nodules. For AI, sensitivity was 0.68 (95% confidence interval 0.57-0.77) in emphysema versus 0.71 (0.62-0.78) in non-emphysema, with FPs/scan of 0.51 and 0.22, respectively (p = 0.028). For HR, sensitivity was 0.76 (0.65-0.84) and 0.80 (0.72-0.86), with FPs/scan of 0.15 and 0.27 (p = 0.230). Overall sensitivity was slightly higher for HR than for AI, but this difference disappeared after the exclusion of benign lymph nodes. FPs/scan were higher for AI in emphysema than in non-emphysema (p = 0.028), while FPs/scan for HR were higher than for AI for 30-100 mm³ nodules in non-emphysema (p = 0.009).

CONCLUSIONS: AI resulted in more FPs/scan in emphysema compared to non-emphysema, a difference not observed for HR.

RELEVANCE STATEMENT: In the creation of a benchmark dataset to validate AI software for lung nodule detection, the inclusion of emphysema cases is important due to the additional number of FPs.

KEY POINTS:
• The sensitivity of nodule detection by AI was similar in emphysema and non-emphysema.
• AI had more FPs/scan in emphysema compared to non-emphysema.
• Sensitivity and FPs/scan by the human reader were comparable for emphysema and non-emphysema.
• Emphysema and non-emphysema representation in a benchmark dataset is important for validating AI.
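The metrics compared in this study are per-group sensitivity (with a confidence interval) and false positives per scan. The short sketch below shows one plausible way to compute them; the counts and the Wilson interval choice are illustrative assumptions, not the study's data or analysis code.

```python
# Sketch of the per-group detection metrics: sensitivity with a Wilson confidence
# interval and false positives per scan. Counts below are hypothetical.
from statsmodels.stats.proportion import proportion_confint

def detection_metrics(true_positives, total_nodules, false_positives, n_scans):
    sensitivity = true_positives / total_nodules
    lo, hi = proportion_confint(true_positives, total_nodules, method="wilson")
    return sensitivity, (lo, hi), false_positives / n_scans

# Hypothetical example for an "emphysema" subgroup of 39 scans.
sens, ci, fps_per_scan = detection_metrics(
    true_positives=40, total_nodules=59, false_positives=20, n_scans=39)
print(f"sensitivity {sens:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), {fps_per_scan:.2f} FPs/scan")
```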


Subject(s)
Artificial Intelligence , Pulmonary Emphysema , Tomography, X-Ray Computed , Humans , Male , Middle Aged , Female , Tomography, X-Ray Computed/methods , Pulmonary Emphysema/diagnostic imaging , Software , Sensitivity and Specificity , Lung Neoplasms/diagnostic imaging , Aged , Radiation Dosage , Solitary Pulmonary Nodule/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
8.
Insights Imaging ; 15(1): 54, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38411750

ABSTRACT

OBJECTIVE: To systematically review radiomic feature reproducibility and model validation strategies in recent studies dealing with CT and MRI radiomics of bone and soft-tissue sarcomas, thus updating a previous version of this review, which included studies published up to 2020.

METHODS: A literature search was conducted on the EMBASE and PubMed databases for papers published between January 2021 and March 2023. Data regarding radiomic feature reproducibility and model validation strategies were extracted and analyzed.

RESULTS: Out of 201 identified papers, 55 were included. They dealt with radiomics of bone (n = 23) or soft-tissue (n = 32) tumors. Thirty-two studies (out of 54 employing manual or semiautomatic segmentation, 59%) included a feature reproducibility analysis. Reproducibility was assessed based on intra/interobserver segmentation variability in 30 (55%) studies and on geometrical transformations of the region of interest in 2 (4%) studies. At least one machine learning validation technique was used for model development in 34 (62%) papers, and K-fold cross-validation was employed most frequently. A clinical validation of the model was reported in 38 (69%) papers. It was performed using a separate dataset from the primary institution (internal test) in 22 (40%), an independent dataset from another institution (external test) in 14 (25%), and both in 2 (4%) studies.

CONCLUSIONS: Compared to papers published up to 2020, a clear improvement was noted, with almost double the number of publications reporting methodological aspects related to reproducibility and validation. Larger multicenter investigations including external clinical validation and the publication of databases in open-access repositories could further improve methodology and bring radiomics from a research area to the clinical stage.

CRITICAL RELEVANCE STATEMENT: An improvement in feature reproducibility and model validation strategies has been shown in this updated systematic review on radiomics of bone and soft-tissue sarcomas, highlighting efforts to enhance methodology and bring radiomics from a research area to the clinical stage.

KEY POINTS:
• 2021-2023 radiomic studies on CT and MRI of musculoskeletal sarcomas were reviewed.
• Feature reproducibility was assessed in more than half (59%) of the studies.
• Model clinical validation was performed in 69% of the studies.
• Internal (44%) and/or external (29%) test datasets were employed for clinical validation.
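The intra-/interobserver reproducibility analyses counted in this review are commonly quantified with an intraclass correlation coefficient (ICC) per feature. The sketch below is a minimal, hypothetical example using the pingouin package on synthetic two-observer measurements; the ICC form and the cut-off mentioned in the comments are common conventions, not figures taken from the review.

```python
# Sketch of an inter-observer reproducibility check for one radiomic feature using
# the intraclass correlation coefficient (pingouin). Values below are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n_lesions = 30
base = rng.normal(5.0, 1.0, n_lesions)               # "true" feature value per lesion
df = pd.DataFrame({
    "lesion":   np.tile(np.arange(n_lesions), 2),
    "observer": ["A"] * n_lesions + ["B"] * n_lesions,
    "value":    np.concatenate([base + rng.normal(0, 0.1, n_lesions),
                                base + rng.normal(0, 0.1, n_lesions)]),
})

icc = pg.intraclass_corr(data=df, targets="lesion", raters="observer", ratings="value")
# ICC2 (two-way random effects, absolute agreement) is a common choice; features are
# often called reproducible when the ICC exceeds a preset cut-off such as 0.75 or 0.90.
print(icc[icc["Type"] == "ICC2"][["Type", "ICC", "CI95%"]])
```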

9.
Insights Imaging ; 15(1): 15, 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38228800

ABSTRACT

OBJECTIVES: To present a framework to develop and implement a fast-track artificial intelligence (AI) curriculum into an existing radiology residency program, with the potential to prepare a new generation of AI-conscious radiologists.

METHODS: The AI-curriculum framework comprises five sequential steps: (1) forming a team of AI experts, (2) assessing the residents' knowledge level and needs, (3) defining learning objectives, (4) matching these objectives with effective teaching strategies, and finally (5) implementing and evaluating the pilot. Following these steps, a multidisciplinary team of AI engineers, radiologists, and radiology residents designed a 3-day program, including didactic lectures, hands-on laboratory sessions, and group discussions with experts to enhance AI understanding. Pre- and post-curriculum surveys were conducted to assess participants' expectations and progress and were analyzed using a Wilcoxon rank-sum test.

RESULTS: There was a 100% response rate to the pre- and post-curriculum surveys (17 and 12 respondents, respectively). Participants' confidence in their knowledge and understanding of AI in radiology significantly increased after completing the program (pre-curriculum mean 3.25 ± 1.48 (SD), post-curriculum mean 6.5 ± 0.90 (SD), p-value = 0.002). A total of 75% confirmed that the course addressed topics that were applicable to their work in radiology. Lectures on the fundamentals of AI and group discussions with experts were deemed most useful.

CONCLUSION: Designing an AI curriculum for radiology residents and implementing it into a radiology residency program is feasible using the framework presented. The 3-day AI curriculum effectively increased participants' perception of knowledge and skills about AI in radiology and can serve as a starting point for further customization.

CRITICAL RELEVANCE STATEMENT: The framework provides guidance for developing and implementing an AI curriculum in radiology residency programs, educating residents on the application of AI in radiology and ultimately contributing to future high-quality, safe, and effective patient care.

KEY POINTS:
• AI education is necessary to prepare a new generation of AI-conscious radiologists.
• The AI curriculum increased participants' perception of AI knowledge and skills in radiology.
• This five-step framework can assist in integrating AI education into radiology residency programs.
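The statistical comparison reported here is a Wilcoxon rank-sum test on two independent groups of respondents (17 pre, 12 post). A minimal sketch with made-up ratings, only to show the kind of call involved:

```python
# Sketch of the pre/post survey comparison (Wilcoxon rank-sum test on independent
# groups of self-rated confidence scores). Ratings below are hypothetical.
from scipy.stats import ranksums

pre  = [2, 3, 4, 2, 5, 3, 1, 4, 3, 2, 5, 4, 3, 2, 6, 3, 4]   # 17 hypothetical ratings
post = [6, 7, 5, 6, 7, 8, 6, 5, 7, 6, 8, 7]                  # 12 hypothetical ratings

stat, p = ranksums(pre, post)
print(f"rank-sum statistic {stat:.2f}, p = {p:.4f}")
```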

10.
Comput Biol Med ; 169: 107871, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38154157

ABSTRACT

BACKGROUND: During lung cancer screening, indeterminate pulmonary nodules (IPNs) are a frequent finding. We aim to predict whether IPNs are resolving or non-resolving to reduce follow-up examinations, using machine learning (ML) models. We incorporated dedicated techniques to enhance prediction explainability.

METHODS: In total, 724 IPNs (size 50-500 mm³, 575 participants) from the Dutch-Belgian Randomized Lung Cancer Screening Trial were used. We implemented six ML models and 14 factors to predict nodule disappearance. Random search was applied to determine the optimal hyperparameters on the training set (579 nodules). ML models were trained using 5-fold cross-validation and tested on the test set (145 nodules). Model predictions were evaluated using recall, precision, the F1 score, and the area under the receiver operating characteristic curve (AUC). The best-performing model was used for three feature importance techniques: mean decrease in impurity (MDI), permutation feature importance (PFI), and SHapley Additive exPlanations (SHAP).

RESULTS: The random forest model outperformed the other ML models with an AUC of 0.865. This model achieved a recall of 0.646, a precision of 0.816, and an F1 score of 0.721. The evaluation of feature importance achieved consistent rankings across all three methods for the most crucial factors. The MDI, PFI, and SHAP methods highlighted volume, maximum diameter, and minimum diameter as the top three factors. However, the remaining factors revealed discrepant rankings across methods.

CONCLUSION: ML models effectively predict IPN disappearance using participant demographics and nodule characteristics. Explainable techniques can assist clinicians in developing understandable preliminary assessments.
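A hypothetical sketch of the pipeline outlined above, a random forest tuned by random search with 5-fold cross-validation, followed by the three feature-importance views (MDI, permutation, SHAP), using placeholder data and search ranges rather than the trial's variables:

```python
# Sketch only: random forest + random search (5-fold CV), then MDI, permutation
# importance and SHAP values. Data, features and hyperparameter ranges are placeholders.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=724, n_features=14, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [100, 300, 500], "max_depth": [None, 5, 10]},
    n_iter=5, cv=5, scoring="roc_auc", random_state=0)
search.fit(X_train, y_train)
model = search.best_estimator_

print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

mdi = model.feature_importances_                                   # mean decrease in impurity
pfi = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0).importances_mean      # permutation importance
shap_values = shap.TreeExplainer(model).shap_values(X_test)        # SHAP values per class
print(mdi.argsort()[::-1][:3], pfi.argsort()[::-1][:3])            # top-3 features per method
```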


Subject(s)
Lung Neoplasms , Humans , Early Detection of Cancer , Machine Learning , ROC Curve , Randomized Controlled Trials as Topic