Results 1 - 5 of 5
1.
Int J Cancer ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38989809

ABSTRACT

The aim of this paper was to explore the role of artificial intelligence (AI) applied to ultrasound imaging in gynecologic oncology. Web of Science, PubMed, and Scopus databases were searched. All studies were imported to RAYYAN QCRI software. The overall quality of the included studies was assessed using the QUADAS-AI tool. Fifty studies were included: 37/50 (74.0%) on ovarian masses or ovarian cancer, 5/50 (10.0%) on endometrial cancer, 5/50 (10.0%) on cervical cancer, and 3/50 (6.0%) on other malignancies. Most studies were at high risk of bias for subject selection (i.e., sample size, source, or scanner model were not specified; data were not derived from open-source datasets; imaging preprocessing was not performed) and index test (AI models were not externally validated), and at low risk of bias for reference standard (i.e., the reference standard correctly classified the target condition) and workflow (i.e., the time between index test and reference standard was reasonable). Most studies presented machine learning models (33/50, 66.0%) for the diagnosis and histopathological correlation of ovarian masses, while others focused on automatic segmentation, reproducibility of radiomics features, improvement of image quality, prediction of therapy resistance, progression-free survival, and genetic mutations. The current evidence supports the role of AI as a complementary clinical and research tool in diagnosis, patient stratification, and prediction of histopathological correlation in gynecological malignancies. For example, the high performance of AI models in discriminating between benign and malignant ovarian masses or in predicting their specific histology can improve the diagnostic accuracy of imaging methods.

3.
Phys Med ; 119: 103297, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38310680

ABSTRACT

PURPOSE: Manual recontouring of targets and Organs At Risk (OARs) is a time-consuming and operator-dependent task. We explored the potential of Generative Adversarial Networks (GAN) to auto-segment the rectum, bladder and femoral heads on 0.35T MRIs to accelerate the online MRI-guided Radiotherapy (MRIgRT) workflow. METHODS: 3D planning MRIs from 60 prostate cancer patients treated on a 0.35T MR-Linac were collected. A 3D GAN architecture and its equivalent 2D version were trained, validated, and tested on 40, 10, and 10 patients, respectively. The volumetric Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff Distance (HD95th) were computed against expert-drawn ground-truth delineations. The networks were also validated on an independent external dataset of 16 patients. RESULTS: In the internal test set, the 3D and 2D GANs showed DSC/HD95th of 0.83/9.72 mm and 0.81/10.65 mm for the rectum, 0.92/5.91 mm and 0.85/15.72 mm for the bladder, and 0.94/3.62 mm and 0.90/9.49 mm for the femoral heads. In the external test set, the performance was 0.74/31.13 mm and 0.72/25.07 mm for the rectum, 0.92/9.46 mm and 0.88/11.28 mm for the bladder, and 0.89/7.00 mm and 0.88/10.06 mm for the femoral heads. The 3D and 2D GANs required on average 1.44 s and 6.59 s, respectively, to generate the OARs' volumetric segmentation for a single patient. CONCLUSIONS: The proposed 3D GAN auto-segments pelvic OARs with high accuracy on 0.35T MRIs, in both the internal and the external test sets, outperforming its 2D equivalent in both segmentation robustness and volume generation time.
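The two evaluation metrics reported above can be computed in a few lines. This is a minimal pure-Python sketch of DSC over voxel-index sets and a brute-force HD95th over surface points, for illustration only; it is not the authors' implementation, which operates on full 3D masks.

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel sets (tuples of indices)."""
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(a, b):
    """95th percentile Hausdorff distance, brute force over all point pairs."""
    def directed(src, dst):
        # for each point in src, distance to its nearest neighbor in dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(a, b) + directed(b, a))
    return d[max(0, math.ceil(0.95 * len(d)) - 1)]

# toy 3-voxel "segmentations" (hypothetical data)
pred = {(1, 2, 3), (1, 2, 4), (1, 3, 3)}
truth = {(1, 2, 3), (1, 2, 4), (2, 3, 3)}
print(round(dice(pred, truth), 2))  # 0.67
```

Real pipelines compute these on binary mask arrays (e.g., with SimpleITK or MedPy), which also handle voxel spacing; the logic is the same.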


Subject(s)
Image Processing, Computer-Assisted , Organs at Risk , Male , Humans , Organs at Risk/diagnostic imaging , Tomography, X-Ray Computed , Pelvis/diagnostic imaging , Magnetic Resonance Imaging
4.
Article in English | MEDLINE | ID: mdl-38405058

ABSTRACT

Introduction: Advancements in MRI-guided radiotherapy (MRgRT) enable clinical parallel workflows (CPW) for online adaptive planning (oART), allowing medical physicists (MPs), physicians (MDs), and radiation therapists (RTTs) to perform their tasks simultaneously. This study evaluates the impact of this upgrade on the total treatment time by analyzing each step of the current 0.35T MRgRT workflow. Methods: The timing of the workflow steps for 254 treatment fractions in 0.35T MRgRT was examined. Patients were grouped by disease site, breathing modality (BM: BHI or FB), and fractionation (stereotactic body RT [SBRT] or standard fractionated long course [LC]). The time spent on the following workflow steps in Adaptive Treatment (ADP) was analyzed: Patient Setup Time (PSt), MRI Acquisition and Matching (MRt), MR Re-contouring Time (RCt), Re-Planning Time (RPt), and Treatment Delivery Time (TDt). The timing of treatments following a simple workflow (SMP), without online re-planning (PSt + MRt + TDt), was also analyzed. Results: The time analysis revealed that the ADP workflow (median: 34 min) is significantly (p < 0.05) longer than the SMP workflow (19 min). The time required for ADP treatments is significantly influenced by TDt, which constitutes 40% of the total time. The oART steps (RCt + RPt) took 11 min (median), representing 27% of the entire procedure. Overall, 79.2% of oART fractions were completed in less than 45 min, and 30.6% were completed in less than 30 min. Conclusion: This preliminary analysis, along with the comparative assessment against existing literature, underscores the potential of CPW to reduce the overall treatment duration in MRgRT-oART. Additionally, it suggests the potential for CPW to promote a more integrated multidisciplinary approach in the execution of oART.
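The abstract reports p < 0.05 for the ADP-vs-SMP comparison without naming the test. A nonparametric two-sample comparison of per-fraction times, such as a Mann-Whitney U test with a normal approximation, could be sketched as follows; the function, variable names, and fraction times are illustrative assumptions, not the study's data or code.

```python
import math
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test p-value (normal approximation,
    midranks for ties, no tie correction in the variance)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    # equal values share the average of their 1-based rank positions
    rank, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2
        i = j
    u1 = sum(rank[v] for v in x) - n1 * (n1 + 1) / 2
    z = (u1 - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return 2 * (1 - NormalDist().cdf(abs(z)))

adp = [30, 32, 34, 36, 38, 40, 42]  # hypothetical adaptive-fraction minutes
smp = [15, 17, 19, 21, 23]          # hypothetical simple-fraction minutes
print(mann_whitney_u(adp, smp) < 0.05)  # True
```

In practice one would use `scipy.stats.mannwhitneyu`, which applies exact p-values for small samples and tie corrections.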

5.
Front Oncol ; 14: 1294252, 2024.
Article in English | MEDLINE | ID: mdl-38606108

ABSTRACT

Purpose: Magnetic resonance imaging (MRI)-guided radiotherapy enables adaptive treatment plans based on daily anatomical changes and accurate organ visualization. However, the bias field artifact can compromise image quality, affecting diagnostic accuracy and quantitative analyses. This study aims to assess the impact of bias field correction on 0.35 T pelvis MRIs by evaluating clinical anatomy visualization and generative adversarial network (GAN) auto-segmentation performance. Materials and methods: 3D simulation MRIs from 60 prostate cancer patients treated on an MR-Linac (0.35 T) were collected and preprocessed with the N4ITK algorithm for bias field correction. A 3D GAN architecture was trained, validated, and tested on 40, 10, and 10 patients, respectively, to auto-segment the organs at risk (OARs) rectum and bladder. The GAN was trained and evaluated either with the original or the bias-corrected MRIs. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95th) were computed for the segmented volumes of each patient. The Wilcoxon signed-rank test assessed the statistical difference of the metrics within OARs, both with and without bias field correction. Five radiation oncologists blindly scored 22 randomly chosen patients in terms of overall image quality and visibility of boundaries (prostate, rectum, bladder, seminal vesicles) of the original and bias-corrected MRIs. Bennett's S score and Fleiss' kappa were used to assess the pairwise interrater agreement and the interrater agreement among all the observers, respectively. Results: In the test set, the GAN trained and evaluated on original and bias-corrected MRIs showed DSC/HD95th of 0.92/5.63 mm and 0.92/5.91 mm for the bladder and 0.84/10.61 mm and 0.83/9.71 mm for the rectum. No statistically significant differences in the distribution of the evaluation metrics were found for either the bladder (DSC: p = 0.07; HD95th: p = 0.35) or the rectum (DSC: p = 0.32; HD95th: p = 0.63).
From the clinical visual grading assessment, the bias-corrected MRI resulted mostly in either no change or an improvement of the image quality and visualization of the organs' boundaries compared with the original MRI. Conclusion: From a clinical point of view, bias field correction improved neither the anatomy visualization nor the OARs' auto-segmentation outputs generated by the GAN.
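Fleiss' kappa, used above for the all-observer agreement, reduces to a short computation on a subjects-by-categories matrix of rating counts. A minimal sketch with toy ratings (hypothetical data, not the study's scores):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an N-subjects x k-categories matrix of rating
    counts. Every subject must be rated by the same number of raters r."""
    n_subjects = len(counts)
    r = sum(counts[0])  # raters per subject
    # mean per-subject agreement P_bar
    p_bar = sum(
        (sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts
    ) / n_subjects
    # chance agreement P_e from the marginal category proportions
    totals = [sum(col) for col in zip(*counts)]
    p_e = sum((t / (n_subjects * r)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# two subjects, three raters, two categories: perfect agreement
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
```

Kappa is 1 for perfect agreement, near 0 for chance-level agreement, and negative when raters agree less than chance.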
