Results 1 - 9 of 9
2.
Int J Cancer ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38989809

ABSTRACT

The aim of this paper was to explore the role of artificial intelligence (AI) applied to ultrasound imaging in gynecologic oncology. The Web of Science, PubMed, and Scopus databases were searched. All studies were imported into the RAYYAN QCRI software. The overall quality of the included studies was assessed using the QUADAS-AI tool. Fifty studies were included: 37/50 (74.0%) on ovarian masses or ovarian cancer, 5/50 (10.0%) on endometrial cancer, 5/50 (10.0%) on cervical cancer, and 3/50 (6.0%) on other malignancies. Most studies were at high risk of bias for subject selection (i.e., sample size, source, or scanner model were not specified; data were not derived from open-source datasets; image preprocessing was not performed) and for the index test (AI models were not externally validated), and at low risk of bias for the reference standard (i.e., the reference standard correctly classified the target condition) and workflow (i.e., the time between index test and reference standard was reasonable). Most studies presented machine learning models (33/50, 66.0%) for the diagnosis and histopathological correlation of ovarian masses, while others focused on automatic segmentation, reproducibility of radiomics features, improvement of image quality, and prediction of therapy resistance, progression-free survival, and genetic mutation. The current evidence supports the role of AI as a complementary clinical and research tool in diagnosis, patient stratification, and prediction of histopathological correlation in gynecological malignancies. For example, the high performance of AI models in discriminating between benign and malignant ovarian masses, or in predicting their specific histology, can improve the diagnostic accuracy of imaging methods.
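The machine learning models surveyed above discriminate benign from malignant ovarian masses with supervised classifiers. As an illustrative sketch only (the feature names and toy data below are invented, not taken from any reviewed study), a minimal logistic-regression classifier on two "radiomics-style" features:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression via full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted malignancy probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy, clearly separable features (hypothetical):
# column 0 ~ echogenicity score, column 1 ~ solid-component fraction
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.3, 0.2],   # benign-like
              [0.8, 0.9], [0.9, 0.8], [0.7, 0.9]])  # malignant-like
y = np.array([0, 0, 0, 1, 1, 1])
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred.tolist())
```

Real studies use far richer feature sets and validation schemes; this only shows the shape of the classification task.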

3.
Front Oncol ; 14: 1294252, 2024.
Article in English | MEDLINE | ID: mdl-38606108

ABSTRACT

Purpose: Magnetic resonance imaging (MRI)-guided radiotherapy enables adaptive treatment plans based on daily anatomical changes and accurate organ visualization. However, the bias field artifact can compromise image quality, affecting diagnostic accuracy and quantitative analyses. This study aims to assess the impact of bias field correction on 0.35 T pelvis MRIs by evaluating clinical anatomy visualization and generative adversarial network (GAN) auto-segmentation performance. Materials and methods: 3D simulation MRIs from 60 prostate cancer patients treated on an MR-Linac (0.35 T) were collected and preprocessed with the N4ITK algorithm for bias field correction. A 3D GAN architecture was trained, validated, and tested on 40, 10, and 10 patients, respectively, to auto-segment the rectum and bladder as organs at risk (OARs). The GAN was trained and evaluated with either the original or the bias-corrected MRIs. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95th) were computed for the segmented volumes of each patient. The Wilcoxon signed-rank test assessed the statistical difference of the metrics within OARs, with and without bias field correction. Five radiation oncologists blindly scored 22 randomly chosen patients in terms of overall image quality and visibility of boundaries (prostate, rectum, bladder, seminal vesicles) on the original and bias-corrected MRIs. Bennett's S score and Fleiss' kappa were used to assess the pairwise interrater agreement and the interrater agreement among all the observers, respectively. Results: In the test set, the GAN trained and evaluated on original and bias-corrected MRIs showed DSC/HD95th of 0.92/5.63 mm and 0.92/5.91 mm for the bladder and 0.84/10.61 mm and 0.83/9.71 mm for the rectum. No statistical differences in the distribution of the evaluation metrics were found for either the bladder (DSC: p = 0.07; HD95th: p = 0.35) or the rectum (DSC: p = 0.32; HD95th: p = 0.63).
From the clinical visual grading assessment, the bias-corrected MRIs mostly resulted in either no change or an improvement in image quality and in the visualization of the organs' boundaries compared with the original MRIs. Conclusion: From a clinical point of view, the bias field correction improved neither the anatomy visualization nor the OARs' auto-segmentation outputs generated by the GAN.
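The abstract above uses Fleiss' kappa to quantify agreement among the five observers. A minimal sketch of that statistic (the rating counts below are illustrative, not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for inter-rater agreement.
    counts: (n_subjects, n_categories) array; counts[i, j] is the number of
    raters assigning subject i to category j. Assumes every subject is rated
    by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar = P_i.mean()
    P_e = (p_j ** 2).sum()                   # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 5 raters on 4 subjects gives kappa = 1.0
perfect = np.array([[5, 0], [5, 0], [0, 5], [0, 5]])
print(round(fleiss_kappa(perfect), 3))  # 1.0
```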

4.
Phys Med ; 119: 103297, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38310680

ABSTRACT

PURPOSE: Manual recontouring of targets and Organs At Risk (OARs) is a time-consuming and operator-dependent task. We explored the potential of Generative Adversarial Networks (GAN) to auto-segment the rectum, bladder and femoral heads on 0.35T MRIs to accelerate the online MRI-guided-Radiotherapy (MRIgRT) workflow. METHODS: 3D planning MRIs from 60 prostate cancer patients treated with 0.35T MR-Linac were collected. A 3D GAN architecture and its equivalent 2D version were trained, validated and tested on 40, 10 and 10 patients respectively. The volumetric Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff Distance (HD95th) were computed against expert drawn ground-truth delineations. The networks were also validated on an independent external dataset of 16 patients. RESULTS: In the internal test set, the 3D and 2D GANs showed DSC/HD95th of 0.83/9.72 mm and 0.81/10.65 mm for the rectum, 0.92/5.91 mm and 0.85/15.72 mm for the bladder, and 0.94/3.62 mm and 0.90/9.49 mm for the femoral heads. In the external test set, the performance was 0.74/31.13 mm and 0.72/25.07 mm for the rectum, 0.92/9.46 mm and 0.88/11.28 mm for the bladder, and 0.89/7.00 mm and 0.88/10.06 mm for the femoral heads. The 3D and 2D GANs required on average 1.44 s and 6.59 s respectively to generate the OARs' volumetric segmentation for a single patient. CONCLUSIONS: The proposed 3D GAN auto-segments pelvic OARs with high accuracy on 0.35T, in both the internal and the external test sets, outperforming its 2D equivalent in both segmentation robustness and volume generation time.
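The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95th) reported above can be computed as follows; this is a minimal brute-force sketch on invented toy masks, not the study's evaluation code:

```python
import numpy as np

def dice(a, b):
    """Volumetric Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th percentile Hausdorff distance between two point sets of shape
    (N, 3). Brute force; fine for illustration, too slow for large surfaces."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # and vice versa
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy 2D masks: a 2x2 square vs. a 2x3 rectangle overlapping it
a = np.zeros((4, 4), int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1
print(round(dice(a, b), 3))  # 0.8

pts = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(hd95(pts, pts))  # identical sets -> 0.0
```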


Subjects
Image Processing, Computer-Assisted; Organs at Risk; Male; Humans; Organs at Risk/diagnostic imaging; Tomography, X-Ray Computed; Pelvis/diagnostic imaging; Magnetic Resonance Imaging
5.
Article in English | MEDLINE | ID: mdl-38405058

ABSTRACT

Introduction: Advancements in MRI-guided radiotherapy (MRgRT) enable clinical parallel workflows (CPW) for online adaptive planning (oART), allowing medical physicists (MPs), physicians (MDs), and radiation therapists (RTTs) to perform their tasks simultaneously. This study evaluates the impact of this upgrade on the total treatment time by analyzing each step of the current 0.35T MRgRT workflow. Methods: The timing of the workflow steps for 254 treatment fractions in 0.35T MRgRT was examined. Patients were grouped based on disease site, breathing modality (BM) (BHI or FB), and fractionation (stereotactic body RT [SBRT] or standard fractionated long course [LC]). The time spent on the following workflow steps in adaptive treatment (ADP) was analyzed: Patient Setup Time (PSt), MRI Acquisition and Matching (MRt), MR Re-contouring Time (RCt), Re-Planning Time (RPt), and Treatment Delivery Time (TDt). The timing of treatments that followed a simple workflow (SMP), without online re-planning (PSt + MRt + TDt), was also analyzed. Results: The time analysis revealed that the ADP workflow (median: 34 min) is significantly (p < 0.05) longer than the SMP workflow (19 min). The time required for ADP treatments is strongly influenced by TDt, which constitutes 40% of the total time. The oART steps (RCt + RPt) took 11 min (median), representing 27% of the entire procedure. Overall, 79.2% of oART fractions were completed in less than 45 min, and 30.6% in less than 30 min. Conclusion: This preliminary analysis, along with a comparative assessment against the existing literature, underscores the potential of CPW to reduce the overall treatment duration in MRgRT-oART. It also suggests that CPW may promote a more integrated multidisciplinary approach to the execution of oART.
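The timing breakdown described above (per-step medians, share of fractions completed under a threshold) amounts to simple descriptive statistics. A sketch with hypothetical per-fraction durations; the numbers below are invented, not the study's data:

```python
from statistics import median

# Hypothetical per-fraction step durations, in minutes:
# PSt (setup), MRt (MRI acquisition/matching), RCt (re-contouring),
# RPt (re-planning), TDt (delivery).
fractions = [
    {"PSt": 5, "MRt": 4, "RCt": 6, "RPt": 5, "TDt": 13},
    {"PSt": 6, "MRt": 5, "RCt": 7, "RPt": 6, "TDt": 15},
    {"PSt": 4, "MRt": 4, "RCt": 5, "RPt": 4, "TDt": 12},
]
totals = [sum(f.values()) for f in fractions]          # total adaptive time
oart = [f["RCt"] + f["RPt"] for f in fractions]        # online-adaptive steps only
share_lt_45 = sum(t < 45 for t in totals) / len(totals)
print(median(totals), median(oart), share_lt_45)
```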

6.
Phys Imaging Radiat Oncol ; 28: 100498, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37928618

ABSTRACT

Background and purpose: Automation is desirable for organ segmentation in radiotherapy. This study compared deep learning methods for auto-segmentation of organs-at-risk (OARs) and the clinical target volume (CTV) in prostate cancer patients undergoing fractionated magnetic resonance (MR)-guided adaptive radiation therapy. Models predicting dense displacement fields (DDFMs) between planning and fraction images were compared to patient-specific (PSM) and baseline (BM) segmentation models. Materials and methods: A dataset of 92 patients with planning and fraction MR images (MRIs) from two institutions was used. DDFMs were trained to predict dense displacement fields (DDFs) between the planning and fraction images, which were subsequently used to propagate the planning contours of the bladder, rectum, and CTV to the daily MRI. Training was performed either with true planning-fraction image pairs or with planning images and their counterparts deformed by known DDFs. The BMs were trained on 53 planning images, while to generate PSMs, the BMs were fine-tuned using the planning image of a given single patient. The evaluation included the Dice similarity coefficient (DSC) and the average (HDavg) and 95th percentile (HD95) Hausdorff distance (HD). Results: The DDFMs, with DSCs of 0.76/0.76 for the bladder/rectum, performed worse than PSMs (0.91/0.90) and BMs (0.89/0.88). The same trend was observed for HDs. For the CTV, DDFM and PSM performed similarly, yielding DSCs of 0.87 and 0.84, respectively. Conclusions: DDFMs were found suitable for CTV delineation after rigid alignment. However, for OARs they were outperformed by PSMs, as they predicted only limited deformations even in the presence of substantial anatomical changes.
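Propagating a planning contour with a dense displacement field, as the DDFMs above do, can be sketched as resampling the planning mask at displaced coordinates. The 2D toy field below is an invented nearest-neighbour illustration (the study works on 3D MRIs with learned fields):

```python
import numpy as np

def propagate_mask(mask, ddf):
    """Propagate a binary planning contour to the fraction geometry using a
    dense displacement field, with nearest-neighbour resampling.
    mask: (H, W) binary array in planning space.
    ddf:  (H, W, 2) displacement (in voxels) mapping each fraction voxel back
          to its source location in planning space."""
    h, w = mask.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + ddf[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + ddf[..., 1]).astype(int), 0, w - 1)
    return mask[src_y, src_x]

mask = np.zeros((5, 5), int); mask[1, 1] = 1
ddf = np.zeros((5, 5, 2)); ddf[..., 1] = -1.0  # shifts the contour right by one voxel
out = propagate_mask(mask, ddf)
print(int(out[1, 2]))  # 1
```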

7.
Med Phys ; 50(3): 1573-1585, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36259384

ABSTRACT

BACKGROUND: Online adaptive radiation therapy (RT) using hybrid magnetic resonance linear accelerators (MR-Linacs) can administer a tailored radiation dose at each treatment fraction. Daily MR imaging followed by organ and target segmentation adjustments makes it possible to capture anatomical changes, improve target volume coverage, and reduce the risk of side effects. The introduction of automatic segmentation techniques could further improve the online adaptive workflow by shortening the re-contouring time and reducing intra- and inter-observer variability. In fractionated RT, prior knowledge, such as planning images and manual expert contours, is usually available before irradiation, but is not used by current artificial-intelligence-based autocontouring approaches. PURPOSE: The goal of this study was to train convolutional neural networks (CNNs) for automatic segmentation of the bladder, rectum (organs at risk, OARs), and clinical target volume (CTV) in prostate cancer patients treated at 0.35 T MR-Linacs. Furthermore, we tested the CNNs' generalization on data from independent facilities and compared them with the MR-Linac treatment planning system (TPS) propagated structures currently used in clinics. Finally, expert planning delineations were utilized for patient-specific (PS) and facility-specific (FS) transfer learning to improve auto-segmentation of the CTV and OARs on fraction images. METHODS: In this study, data from fractionated treatments at 0.35 T MR-Linacs were leveraged to develop a 3D U-Net-based automatic segmentation. Cohort C1 had 73 planning images, and cohort C2 had 19 planning and 240 fraction images. The baseline models (BMs) were trained solely on C1 planning data, using 53 MRIs for training and 10 for validation. To assess their accuracy, the models were tested on three data subsets: (i) 10 C1 planning images not used for training, (ii) 19 C2 planning images, and (iii) 240 C2 fraction images.
BMs also served as a starting point for FS and PS transfer learning, where the planning images from C2 were used for network parameter fine-tuning. The segmentation output of the different trained models was compared against expert ground truth by means of geometric metrics. Moreover, a trained physician graded the network segmentations as well as the segmentations propagated by the clinical TPS. RESULTS: The BMs showed Dice similarity coefficients (DSC) of 0.88(4) and 0.93(3) for the rectum and the bladder, respectively, independent of the facility. CTV segmentation with the BM was best for intermediate- and high-risk cancer patients from C1, with DSC=0.84(5), and worst for C2, with DSC=0.74(7). PS transfer learning brought a significant improvement in CTV segmentation, yielding DSC=0.72(4) for post-prostatectomy and low-risk patients and DSC=0.88(5) for intermediate- and high-risk patients. FS training did not considerably improve the segmentation accuracy. The physician's assessment of the TPS-propagated versus network-generated structures showed a clear advantage of the latter. CONCLUSIONS: The obtained results show that the presented segmentation technique has the potential to improve automatic segmentation for MR-guided RT.
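The patient-specific transfer learning above fine-tunes a baseline model on one patient's planning data. The idea can be illustrated with a stand-in linear model; the data and mappings below are synthetic, and gradient descent on least squares stands in for CNN training:

```python
import numpy as np

def sgd_fit(w, X, y, lr=0.01, steps=500):
    """Full-batch gradient descent on least squares; stands in for network
    training, starting from the supplied weights."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
# "Cohort" data following one intensity-to-label mapping...
Xc = rng.normal(size=(200, 3))
yc = Xc @ np.array([1.0, -2.0, 0.5])
# ...and a single "patient" whose mapping is shifted (anatomy/contrast differ).
Xp = rng.normal(size=(20, 3))
yp = Xp @ np.array([1.5, -1.5, 0.2])

w_base = sgd_fit(np.zeros(3), Xc, yc)   # baseline model (BM), cohort only
w_ps = sgd_fit(w_base, Xp, yp)          # patient-specific (PS) fine-tuning

err_base = np.mean((Xp @ w_base - yp) ** 2)
err_ps = np.mean((Xp @ w_ps - yp) ** 2)
print(err_ps < err_base)  # True: fine-tuning fits the patient better
```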


Subjects
Artificial Intelligence; Prostatic Neoplasms; Male; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Radiotherapy Planning, Computer-Assisted/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Organs at Risk/radiation effects; Machine Learning
8.
Radiother Oncol ; 176: 31-38, 2022 11.
Article in English | MEDLINE | ID: mdl-36063982

ABSTRACT

INTRODUCTION: This study aims to apply a conditional Generative Adversarial Network (cGAN) to generate synthetic Computed Tomography (sCT) images from 0.35 Tesla Magnetic Resonance (MR) images of the thorax. METHODS: Sixty patients treated for lung lesions were enrolled and divided into a training (32), validation (8), internal test (10, TA), and external test (10, TB) set. Image accuracy of the generated sCT was evaluated by computing the mean absolute error (MAE) and mean error (ME) with respect to the original CT. Three treatment plans were calculated for each patient, with the MRI as the reference image: the original CT, the sCT (pure sCT), and the sCT with GTV density override (hybrid sCT) were used as Electron Density (ED) maps. Dose accuracy was evaluated by comparing the treatment plans in terms of gamma analysis and Dose Volume Histogram (DVH) parameters. RESULTS: No significant difference was observed between the test sets for image and dose accuracy parameters. Considering the whole test cohort, a MAE of 54.9 ± 10.5 HU and a ME of 4.4 ± 7.4 HU were obtained. Mean gamma passing rates for the 2%/2 mm and 3%/3 mm tolerance criteria were 95.5 ± 5.9% and 98.2 ± 4.1% for pure sCT, and 96.1 ± 5.1% and 98.5 ± 3.9% for hybrid sCT; the difference between the two approaches was significant (p = 0.01). As regards the DVH analysis, differences in the estimation of target parameters were within 5% using the hybrid approach and within 20% using pure sCT. CONCLUSION: The deep learning algorithm presented here can generate sCT images of the thorax with good image and dose accuracy, especially when the hybrid approach is used. The algorithm does not suffer from inter-scanner variability, making the implementation of MR-only workflows for palliative treatments feasible.
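The image-accuracy metrics above, MAE and ME in Hounsfield units between the sCT and the reference CT, are straightforward voxel-wise statistics. A minimal sketch with invented HU values:

```python
import numpy as np

def mae_me(sct, ct, mask=None):
    """Mean absolute error and mean error (HU) of a synthetic CT against the
    reference CT, optionally restricted to a body mask."""
    diff = sct.astype(float) - ct.astype(float)
    if mask is not None:
        diff = diff[mask]
    return np.abs(diff).mean(), diff.mean()

# Tiny invented HU grids (real evaluations use full 3D volumes)
ct = np.array([[0.0, 100.0], [-500.0, 40.0]])
sct = np.array([[10.0, 90.0], [-480.0, 40.0]])
mae, me = mae_me(sct, ct)
print(mae, me)  # 10.0 5.0
```

Note that the ME keeps the sign of the errors, so over- and under-estimated voxels cancel, which is why it is reported alongside the MAE.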


Subjects
Deep Learning; Radiotherapy, Image-Guided; Humans; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Thorax; Lung; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Dosage
9.
Prostate Cancer Prostatic Dis ; 25(2): 359-362, 2022 02.
Article in English | MEDLINE | ID: mdl-34480083

ABSTRACT

BACKGROUND: In the current era of precision prostate cancer (PCa) surgery, identifying the best candidates for prostate biopsy remains an open issue. The aim of this study was to evaluate whether prostate target biopsy (TB) outcomes can be predicted using an artificial intelligence approach based on a set of pre-biopsy clinical characteristics. METHODS: Pre-biopsy characteristics in terms of PSA, PSA density, digital rectal examination (DRE), previous prostate biopsies, number of suspicious lesions at mp-MRI, lesion volume, lesion location, and PI-RADS score were extracted from our prospectively maintained TB database from March 2014 to December 2019. Our approach is based on fuzzy logic and associative rule mining, with the aim of predicting TB outcomes. RESULTS: A total of 1448 patients were included. Using the Frequent-Pattern growth algorithm, we extracted 875 rules, which were used to build the fuzzy classifier. 963 subjects were classified, while the remaining 484 subjects were not classified, since no rules matched their input variables. Analyzing the classified subjects, we obtained a specificity of 59.2% and a sensitivity of 90.8%, with negative and positive predictive values of 81.3% and 76.6%, respectively. In particular, focusing on ISUP ≥ 3 PCa, our model was able to correctly predict the biopsy outcomes in 98.1% of cases. CONCLUSIONS: In this study we demonstrated that considering several pre-biopsy variables simultaneously with artificial intelligence algorithms can improve the prediction of TB outcomes, outclassing the performance of PSA, its derivatives, and MRI alone.
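The sensitivity, specificity, NPV, and PPV reported above all follow from confusion-matrix counts. A minimal sketch; the counts below are invented for illustration, not reconstructed from the study:

```python
def biopsy_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts
    (tp: true positives, fp: false positives, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly flagged
        "specificity": tn / (tn + fp),  # negatives correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = biopsy_metrics(tp=90, fp=30, tn=60, fn=10)
print(m["sensitivity"], m["ppv"])  # 0.9 0.75
```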


Subjects
Prostate; Prostatic Neoplasms; Artificial Intelligence; Biopsy; Fuzzy Logic; Humans; Image-Guided Biopsy; Magnetic Resonance Imaging; Male; Prostate/diagnostic imaging; Prostate/pathology; Prostate-Specific Antigen; Prostatic Neoplasms/diagnosis; Prostatic Neoplasms/pathology; Retrospective Studies