Results 1 - 20 of 183
1.
Radiother Oncol ; : 110567, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39374675

ABSTRACT

BACKGROUND AND PURPOSE: This study aimed to train and validate a multi-institutional deep learning (DL) auto-segmentation model for the nodal clinical target volume (CTVn) in high-risk breast cancer (BC) patients, with both the training and validation datasets created with multi-institutional participation, toward national clinical implementation in Denmark. MATERIALS AND METHODS: A gold standard (GS) dataset and a high-quality training dataset were created by 21 BC delineation experts from all radiotherapy centres in Denmark. The delineations were created according to ESTRO consensus delineation guidelines. Four models were trained: one per laterality and extension of the CTVn internal mammary nodes. The DL models were tested quantitatively on their own test set and against the interobserver variation (IOV) in the GS dataset using geometric metrics such as the Dice similarity coefficient (DSC). A blinded qualitative evaluation was conducted by a national board, which was presented with both DL and manual delineations. RESULTS: A median DSC > 0.7 was found for all structures except the CTVn interpectoral node in one of the models. In the qualitative evaluation, 'no corrections needed' was recorded for 297 (36%) of the DL structures and 286 (34%) of the manual delineations. A higher rate of 'major corrections' and 'easier to start from scratch' was found for the manual delineations. The models performed within the IOV of an expert group, with two exceptions. CONCLUSION: DL models were developed on a national consensus cohort, performed on par with the IOV between BC experts, and had comparable or higher clinical acceptance than expert manual delineations.
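The Dice similarity coefficient reported throughout these studies measures the voxel overlap of two binary masks. A minimal illustration with toy masks (function name and data are mine, not from any of the cited pipelines):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy 2-D example: a 6x6 square vs. a 4x4 square fully inside it
m1 = np.zeros((10, 10), bool); m1[2:8, 2:8] = True   # 36 voxels
m2 = np.zeros((10, 10), bool); m2[4:8, 4:8] = True   # 16 voxels
print(round(dice(m1, m2), 3))  # 2*16/(36+16) -> 0.615
```

A DSC above roughly 0.7, as in the results above, is a common (though structure-dependent) threshold for acceptable overlap.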

2.
J Appl Clin Med Phys ; : e14513, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284283

ABSTRACT

PURPOSE: We have built a novel AI-driven QA method called AutoConfidence (ACo) to estimate segmentation confidence on a per-voxel basis without gold-standard segmentations, enabling robust, efficient review of automated segmentation (AS). We have demonstrated this method on brain OAR AS on MRI, using internal and external (third-party) AS models. METHODS: Thirty-two retrospective, MRI-planned glioma cases were randomly selected from a local clinical cohort for ACo training. A generator was trained adversarially to produce internal autosegmentations (IAS), with a discriminator estimating voxel-wise IAS uncertainty given the input MRI. Confidence maps for each proposed segmentation were produced for operator use in AS editing and were compared with "difference to gold-standard" error maps. Nine cases were used for testing ACo performance on IAS and for validation with two external deep learning segmentation model predictions [external model with low-quality AS (EM-LQ) and external model with high-quality AS (EM-HQ)]. Matthews correlation coefficient (MCC), false-positive rate (FPR), false-negative rate (FNR), and visual assessment were used for evaluation. Edge removal and geometric distance corrections were applied to achieve more useful and clinically relevant confidence maps and performance metrics. RESULTS: ACo showed generally excellent performance on both internal and external segmentations across all OARs (except the lenses). MCC was higher on IAS and low-quality external segmentations (EM-LQ) than on high-quality ones (EM-HQ). On IAS and EM-LQ, average MCC (excluding lenses) varied from 0.6 to 0.9, while average FPR and FNR were ≤0.13 and ≤0.21, respectively. For EM-HQ, average MCC varied from 0.4 to 0.8, while average FPR and FNR were ≤0.37 and ≤0.22, respectively.
CONCLUSION: ACo was a reliable predictor of uncertainty and errors on AS generated both internally and externally, demonstrating its potential as an independent, reference-free QA tool, which could help operators deliver robust, efficient autosegmentation in the radiotherapy clinic.
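The MCC/FPR/FNR trio used above summarizes a voxel-wise confusion matrix between a predicted and a reference mask. A generic sketch (not the ACo implementation; names are illustrative):

```python
import numpy as np

def confusion_rates(pred: np.ndarray, truth: np.ndarray):
    """Matthews correlation coefficient, false-positive rate and
    false-negative rate between two binary voxel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = float(np.sum(pred & truth))
    tn = float(np.sum(~pred & ~truth))
    fp = float(np.sum(pred & ~truth))
    fn = float(np.sum(~pred & truth))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return mcc, fpr, fnr

truth = np.array([1, 1, 1, 0, 0, 0], bool)
pred  = np.array([1, 1, 0, 1, 0, 0], bool)
mcc, fpr, fnr = confusion_rates(pred, truth)  # each 1/3 for this toy case
```

Unlike DSC, MCC also rewards correctly predicted background, which is why it is a stricter score for small structures such as the lenses.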

3.
Article in English | MEDLINE | ID: mdl-39307323

ABSTRACT

PURPOSE: Online adaptive proton therapy (oAPT) is essential to address interfractional anatomical changes in patients receiving pencil beam scanning proton therapy (PBSPT). Artificial intelligence (AI)-based auto-segmentation can increase its efficiency and accuracy. Linear energy transfer (LET)-based biological effect evaluation can potentially mitigate adverse events caused by high LET. New spot arrangement based on the verification CT (vCT) can further improve re-plan quality. We propose an oAPT workflow that incorporates all of these functionalities and validate the feasibility of its clinical implementation with prostate patients. METHODS AND MATERIALS: The AI-based auto-segmentation tool AccuContour™ (Manteia, Xiamen, China) was seamlessly integrated into oAPT. An initial spot-arrangement tool on the vCT for re-optimization was implemented using ray tracing. An LET-based biological effect evaluation tool was developed to assess the overlap region of high dose and high LET in selected OARs. Eleven prostate cancer patients were retrospectively selected to verify the efficacy and efficiency of the proposed oAPT workflow. The time cost of each component in the workflow was recorded for analysis. RESULTS: The verification plan showed significant degradation of CTV coverage and rectum and bladder sparing due to interfractional anatomical changes. Re-optimization on the vCT greatly improved plan quality. No overlap regions of high dose and high LET distributions were observed in the bladder or rectum in the re-plans. 3D gamma analyses in PSQA confirmed the accuracy of the re-plan doses before delivery (gamma passing rate = 99.57 ± 0.46%) and after delivery (98.59 ± 1.29%). The robustness of the re-plans passed all clinical requirements. The average time for complete execution of the workflow was 9.12 ± 0.85 minutes, excluding manual intervention time.
CONCLUSION: The AI-facilitated oAPT workflow was demonstrated to be both efficient and effective by generating a re-plan that significantly improved the plan quality in prostate cancer treated with PBSPT.
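The gamma passing rate quoted above combines a dose-difference criterion with a distance-to-agreement (DTA) search. A deliberately simplified 1-D global-gamma sketch under assumed criteria (3 mm DTA, 3% of max dose); clinical tools work in 3-D and interpolate the evaluated dose, so this is illustrative only:

```python
import numpy as np

def gamma_pass_rate_1d(ref, ev, dx=1.0, dta=3.0, dd=0.03):
    """Simplified global 1-D gamma analysis of two dose profiles sampled on
    the same grid. `dd` is the dose-difference criterion as a fraction of the
    maximum reference dose, `dta` the distance-to-agreement in mm, `dx` the
    grid spacing in mm. Returns the fraction of points with gamma <= 1."""
    ref, ev = np.asarray(ref, float), np.asarray(ev, float)
    x = np.arange(len(ref)) * dx
    dd_abs = dd * ref.max()
    gamma = np.empty(len(ref))
    for i, d_r in enumerate(ref):
        # search all evaluated points for the best space/dose trade-off
        g2 = ((x - x[i]) / dta) ** 2 + ((ev - d_r) / dd_abs) ** 2
        gamma[i] = np.sqrt(g2.min())
    return float(np.mean(gamma <= 1.0))

ref = np.linspace(0.0, 1.0, 11)  # stand-in dose profile
print(gamma_pass_rate_1d(ref, ref))  # identical profiles always pass -> 1.0
```

Passing rates near 99%, as reported, mean almost every point satisfies the combined criterion against the delivered dose.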

4.
Clin Transl Radiat Oncol ; 49: 100855, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39308634

ABSTRACT

Introduction: Target volume delineation is routinely performed in postoperative radiotherapy (RT) for breast cancer patients, but it is a time-consuming process. The aim of the present study was to validate the quality, clinical usability and institution-specific implementation of different auto-segmentation tools in clinical routine. Methods: Three commercially available, artificial intelligence-based, ESTRO-guideline-based segmentation models (M1-3) were applied to fifty consecutive reference patients who received postoperative local RT including regional nodal irradiation for breast cancer, delineating the following clinical target volumes: the residual breast, implant or chestwall, axilla levels 1 and 2, the infra- and supraclavicular regions, and the interpectoral and internal mammary nodes. Objective evaluation of the created structures was conducted with the Dice similarity index (DICE) and the Hausdorff distance, alongside a manual evaluation of usability. Results: The resulting geometries of the segmentation models were compared to the reference volumes for each patient and required no or only minor corrections in 72% (M1), 64% (M2) and 78% (M3) of the cases. The median DICE and Hausdorff values for the resulting planning target volumes were 0.87-0.88 and 2.96-3.55, respectively. Clinical usability was significantly correlated with the DICE index, with calculated cut-off values of 0.82-0.86 defining 'no or minor adjustments'. Right- or left-sided target and breathing method (deep inspiration breath hold vs. free breathing) did not impact the quality of the resulting structures. Conclusion: Artificial intelligence-based auto-segmentation programs showed high accuracy and provided standardization and efficient support for guideline-based target volume contouring as a precondition for fully automated workflows in radiotherapy treatment planning.

5.
Phys Imaging Radiat Oncol ; 32: 100648, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39319094

ABSTRACT

Background and purpose: In online adaptive magnetic resonance image (MRI)-guided radiotherapy (MRIgRT), manual contouring of rectal tumors on daily images is labor-intensive and time-consuming. Automation of this task is complex due to substantial variation in tumor shape and location between patients. The aim of this work was to investigate different approaches of propagating patient-specific prior information to the online adaptive treatment fractions to improve deep-learning-based auto-segmentation of rectal tumors. Materials and methods: 243 T2-weighted MRI scans of 49 rectal cancer patients treated on the 1.5T MR-linear accelerator (MR-Linac) were utilized to train models to segment rectal tumors. As benchmark, an MRI_only auto-segmentation model was trained. Three approaches of including a patient-specific prior were studied: (1) including the segmentations of fraction 1 as an extra input channel for the auto-segmentation of subsequent fractions (MRI+prior), (2) fine-tuning the MRI_only model to fraction 1 (PSF_1), and (3) fine-tuning the MRI_only model on all earlier fractions (PSF_cumulative). Auto-segmentations were compared to the manual segmentation using geometric similarity metrics. Clinical impact was assessed by evaluating post-treatment target coverage. Results: All patient-specific methods outperformed the MRI_only segmentation approach. Median 95th percentile Hausdorff distances (95HD) were 22.0 (range: 6.1-76.6) mm for MRI_only segmentation, 9.9 (range: 2.5-38.2) mm for MRI+prior segmentation, 6.4 (range: 2.4-17.8) mm for PSF_1 and 4.8 (range: 1.7-26.9) mm for PSF_cumulative. PSF_cumulative was found to be superior to PSF_1 from fraction 4 onward (p = 0.014). Conclusion: Patient-specific fine-tuning of automatically segmented rectal tumors, using images and segmentations from all previous fractions, yields superior quality compared to the other auto-segmentation approaches.
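The 95th percentile Hausdorff distance (95HD) used above trims the worst 5% of boundary mismatches, making it less sensitive to outliers than the classic maximum. A point-set sketch (real tools extract contour/surface voxels first, and some pool the two directed distances differently, so treat this as one common convention):

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between point sets of
    shape (N, D) and (M, D), pooling both directed nearest-neighbour
    distances before taking the percentile."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest neighbour in B
    d_ba = d.min(axis=0)  # each point of B to its nearest neighbour in A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hd95(a, b))  # 1.0
```

With contours in millimetres, the 22 mm vs. 4.8 mm medians above translate directly into how far the worst retained boundary points sit from the manual contour.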

6.
Phys Med Biol ; 69(19)2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39270708

ABSTRACT

Objective. To develop and evaluate a 3D Prompt-ResUNet module that combines a prompt-based model with 3D nnUNet for rapid and consistent autosegmentation of the high-risk clinical target volume (HRCTV) and organs at risk (OAR) in high-dose-rate brachytherapy for cervical cancer patients. Approach. We used 73 computed tomography scans and 62 magnetic resonance imaging scans from 135 (103 for training, 16 for validation, and 16 for testing) cervical cancer patients across two hospitals for HRCTV and OAR segmentation. The deep learning networks 3D Prompt-ResUNet, nnUNet, and Segment Anything Model-Med3D were compared for the segmentation. Evaluation was conducted in two parts: geometric and clinical assessments. Quantitative metrics included the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95%), Jaccard index (JI), and Matthews correlation coefficient (MCC). Clinical evaluation involved interobserver comparison, 4-grade expert scoring, and a double-blinded Turing test. Main results. The Prompt-ResUNet model performed most similarly to experienced radiation oncologists, outperforming less experienced ones. During testing, the DSC, HD95% (mm), JI, and MCC values (mean ± SD) for the HRCTV were 0.92 ± 0.03, 2.91 ± 0.69, 0.85 ± 0.04, and 0.92 ± 0.02, respectively. For the bladder, these values were 0.93 ± 0.05, 3.07 ± 1.05, 0.87 ± 0.08, and 0.93 ± 0.05; for the rectum, 0.87 ± 0.03, 3.54 ± 1.46, 0.78 ± 0.05, and 0.87 ± 0.03; and for the sigmoid, 0.76 ± 0.11, 7.54 ± 5.54, 0.63 ± 0.14, and 0.78 ± 0.09.
The Prompt-ResUNet achieved a clinical viability score of at least 2 in all evaluation cases (100%) for both HRCTV and bladder and exceeded the 30% positive rate benchmark for all evaluated structures in the Turing test.Significance.The Prompt-ResUNet architecture demonstrated high consistency with ground truth in autosegmentation of HRCTV and OARs, reducing interobserver variability and shortening treatment times.
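For binary masks the Jaccard index and Dice coefficient carry the same information and can be converted with JI = DSC / (2 − DSC), which makes reported DSC/JI pairs easy to cross-check:

```python
def jaccard_from_dice(dsc: float) -> float:
    """Convert a Dice similarity coefficient to the equivalent Jaccard index
    for the same pair of binary masks: JI = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

# cross-check against the HRCTV figures above: DSC 0.92 corresponds to JI 0.85
print(round(jaccard_from_dice(0.92), 2))  # 0.85
```

The identity follows from JI = |A∩B| / |A∪B| and DSC = 2|A∩B| / (|A| + |B|), since |A∪B| = |A| + |B| − |A∩B|.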


Subject(s)
Brachytherapy, Deep Learning, Organs at Risk, Radiotherapy Dosage, Uterine Cervical Neoplasms, Humans, Uterine Cervical Neoplasms/radiotherapy, Uterine Cervical Neoplasms/diagnostic imaging, Brachytherapy/methods, Female, Organs at Risk/radiation effects, Computer-Assisted Image Processing/methods, Radiation Dosage, X-Ray Computed Tomography, Computer-Assisted Radiotherapy Planning/methods, Three-Dimensional Imaging
7.
Article in English | MEDLINE | ID: mdl-39317606

ABSTRACT

AIMS: Symptomatic radiation cardiotoxicity affects up to 30% of patients with lung cancer, and several heart substructure doses are associated with reduced overall survival. A greater focus on minimising cardiotoxicity is now possible due to advancements in radiotherapy technology and the new discipline of cardio-oncology, but the uptake of emerging data has not been ascertained. A global cross-sectional analysis of radiation oncologists who treat lung cancer was therefore conducted by the International Cardio-Oncology Society (ICOS) in order to establish the impact of recently published literature and guidelines on practice. MATERIALS AND METHODS: A bespoke questionnaire was designed following an extensive review of the literature and from recurring relevant themes presented at radiation oncology and cardio-oncology research meetings. Six question domains were retained following consensus discussions among the investigators, comprising 55 multiple-choice stems: guidelines, cardiovascular assessment, cardiology investigations, radiotherapy planning strategies, primary prevention prescribing and local cardio-oncology service access. An invitation was sent to all radiation oncologists registered with ICOS and to radiation oncology colleagues of the investigators. RESULTS: In total, 118 participants were recruited and 92% were consultant physicians. The ICOS 2021 expert consensus statement was rated as the most useful position paper, followed by the joint ESC-ESTRO 2022 guideline. The majority (80%) of participants indicated that a detailed cardiovascular history was advisable. Although 69% of respondents deemed the availability of cardiac substructure auto-segmentation to be very/quite important, it was implemented by only a few, the most commonly used metric being the left anterior descending coronary artery V15. A distinct cardio-oncology service was available to 39% of participants, while the remainder utilised general cardiology services.
CONCLUSION: The uptake of recent guidelines on cardiovascular optimisation is good, but access to cardiology investigations and consultations, and auto-segmentation, represent barriers to modifying radiotherapy practices in lung cancer to reduce the risk of radiation cardiotoxicity.

8.
J Pers Med ; 14(9)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39338233

ABSTRACT

Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under dynamic anatomical conditions. However, recontouring and time constraints hinder the implementation of real-time ART workflows. Various auto-segmentation methods, including deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS), have been developed to address these challenges. Despite the potential of DLS methods, clinical implementation remains difficult due to the need for large, high-quality datasets to ensure model generalizability. This study introduces an InterVision framework for segmentation. The InterVision framework can interpolate or create intermediate visuals between existing images to capture specific patient characteristics. The InterVision model is trained in two steps: (1) generating a general model using the original dataset, and (2) tuning the general model using the dataset generated by the InterVision framework. The InterVision framework generates intermediate images between existing patient image slices using deformable vectors, effectively capturing unique patient characteristics. By creating a more comprehensive dataset that reflects these individual characteristics, the InterVision model produces more accurate contours than general models. Models are evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95% Hausdorff distance (HD95%) for 18 structures in 20 test patients. The Dice score was 0.81 ± 0.05 for the general model, 0.82 ± 0.04 for the general fine-tuning model, and 0.85 ± 0.03 for the InterVision model. The Hausdorff distance was 3.06 ± 1.13 for the general model, 2.81 ± 0.77 for the general fine-tuning model, and 2.52 ± 0.50 for the InterVision model. The InterVision model showed the best performance of the three.
The InterVision framework presents a versatile approach adaptable to various tasks where prior information is accessible, such as in ART settings. This capability is particularly valuable for accurately predicting complex organs and targets that pose challenges for traditional deep learning algorithms.

9.
Cancer Radiother ; 28(4): 354-364, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39147623

ABSTRACT

PURPOSE: This study aimed to design an autodelineation model based on convolutional neural networks for generating high-risk clinical target volumes and organs at risk in image-guided adaptive brachytherapy for cervical cancer. MATERIALS AND METHODS: A novel SERes-u-net was trained and tested using CT scans from 98 patients with locally advanced cervical cancer who underwent image-guided adaptive brachytherapy. The Dice similarity coefficient, 95th percentile Hausdorff distance, and clinical assessment were used for evaluation. RESULTS: The mean Dice similarity coefficients of our model were 80.8%, 91.9%, 85.2%, 60.4%, and 82.8% for the high-risk clinical target volumes, bladder, rectum, sigmoid, and bowel loops, respectively. The corresponding 95th percentile Hausdorff distances were 5.23 mm, 4.75 mm, 4.06 mm, 30.0 mm, and 20.5 mm. The evaluation results revealed that 99.3% of the convolutional neural network-generated high-risk clinical target volume slices were acceptable for oncologist A and 100% for oncologist B. Most segmentations of the organs at risk were clinically acceptable, except for 25% of the sigmoid contours, which required significant revision in the opinion of oncologist A. There was a significant difference in the clinical evaluation of the convolutional neural network-generated high-risk clinical target volumes between the two oncologists (P<0.001), whereas the score differences for the organs at risk were not significant. In the consistency evaluation, a large discrepancy was observed between senior and junior clinicians: about 40% of the SERes-u-net-generated contours were thought to be better by junior clinicians. CONCLUSION: The high-risk clinical target volumes and organs at risk of cervical cancer generated by the proposed convolutional neural network model can be used clinically, potentially improving segmentation consistency and the efficiency of contouring in the image-guided adaptive brachytherapy workflow.


Subject(s)
Brachytherapy, Neural Networks (Computer), Organs at Risk, Image-Guided Radiotherapy, Rectum, Uterine Cervical Neoplasms, Humans, Uterine Cervical Neoplasms/radiotherapy, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/pathology, Brachytherapy/methods, Organs at Risk/diagnostic imaging, Organs at Risk/radiation effects, Female, Image-Guided Radiotherapy/methods, Rectum/diagnostic imaging, X-Ray Computed Tomography/methods, Urinary Bladder/diagnostic imaging, Urinary Bladder/radiation effects, Sigmoid Colon/diagnostic imaging, Computer-Assisted Radiotherapy Planning/methods, Middle Aged, Adult
10.
Biomed Phys Eng Express ; 10(5)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39127060

ABSTRACT

Objective. Target volumes for radiotherapy are usually contoured manually, which can be time-consuming and prone to inter- and intra-observer variability. Automatic contouring by convolutional neural networks (CNN) can be fast and consistent but may produce unrealistic contours or miss relevant structures. We evaluate approaches for increasing the quality and assessing the uncertainty of CNN-generated contours of head and neck cancers with PET/CT as input. Approach. Two patient cohorts with head and neck squamous cell carcinoma and baseline 18F-fluorodeoxyglucose positron emission tomography and computed tomography images (FDG-PET/CT) were collected retrospectively from two centers. The union of manual contours of the gross primary tumor and involved nodes was used to train CNN models for generating automatic contours. The impact of image preprocessing, image augmentation, transfer learning and CNN complexity, architecture, and dimension (2D or 3D) on model performance and generalizability across centers was evaluated. A Monte Carlo dropout technique was used to quantify and visualize the uncertainty of the automatic contours. Main results. CNN models provided contours with good overlap with the manually contoured ground truth (median Dice similarity coefficient: 0.75-0.77), consistent with reported inter-observer variations and previous auto-contouring studies. Image augmentation and model dimension, rather than model complexity, architecture, or advanced image preprocessing, had the largest impact on model performance and cross-center generalizability. Transfer learning on a limited number of patients from a separate center increased model generalizability without decreasing model performance on the original training cohort. High model uncertainty was associated with false-positive and false-negative voxels as well as low Dice coefficients. Significance. High-quality automatic contours can be obtained using deep learning architectures that are not overly complex. Uncertainty estimation of the predicted contours shows potential for highlighting regions requiring manual revision or flagging segmentations requiring manual inspection and intervention.
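Monte Carlo dropout estimates uncertainty by keeping dropout active at inference and aggregating repeated stochastic forward passes. A framework-agnostic sketch of the aggregation step, with random numbers standing in for a real model's sigmoid output maps (function name and thresholding heuristic are mine):

```python
import numpy as np

def mc_dropout_maps(passes):
    """passes: (T, H, W) stack of sigmoid outputs from T stochastic forward
    passes of the same input. Returns the mean probability map and the
    voxel-wise standard deviation, used as an uncertainty map."""
    passes = np.asarray(passes, dtype=float)
    return passes.mean(axis=0), passes.std(axis=0)

rng = np.random.default_rng(0)
fake_passes = rng.uniform(0.4, 0.6, size=(20, 8, 8))  # stand-in for T=20 network outputs
prob, unc = mc_dropout_maps(fake_passes)
flag_for_review = unc > unc.mean()  # e.g. highlight voxels the model disagrees with itself on
```

In practice the T passes come from running the trained network with its dropout layers left enabled; voxels where the passes disagree (high standard deviation) are the ones the study found to coincide with false positives and false negatives.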


Subject(s)
Deep Learning, Head and Neck Neoplasms, Computer-Assisted Image Processing, Positron Emission Tomography-Computed Tomography, Humans, Head and Neck Neoplasms/diagnostic imaging, Positron Emission Tomography-Computed Tomography/methods, Uncertainty, Computer-Assisted Image Processing/methods, Retrospective Studies, Fluorodeoxyglucose F18, Neural Networks (Computer), Algorithms
11.
Article in English | MEDLINE | ID: mdl-39117164

ABSTRACT

PURPOSE: Artificial intelligence (AI)-aided methods have made significant progress in the auto-delineation of normal tissues. However, these approaches struggle with the auto-contouring of radiotherapy target volume. Our goal is to model the delineation of target volume as a clinical decision-making problem, resolved by leveraging large language model-aided multimodal learning approaches. METHODS AND MATERIALS: A vision-language model, termed Medformer, has been developed, employing the hierarchical vision transformer as its backbone, and incorporating large language models to extract text-rich features. The contextually embedded linguistic features are seamlessly integrated into visual features for language-aware visual encoding through the visual language attention module. Metrics, including Dice similarity coefficient (DSC), intersection over union (IOU), and 95th percentile Hausdorff distance (HD95), were used to quantitatively evaluate the performance of our model. The evaluation was conducted on an in-house prostate cancer dataset and a public oropharyngeal carcinoma (OPC) dataset, totaling 668 subjects. RESULTS: Our Medformer achieved a DSC of 0.81 ± 0.10 versus 0.72 ± 0.10, IOU of 0.73 ± 0.12 versus 0.65 ± 0.09, and HD95 of 9.86 ± 9.77 mm versus 19.13 ± 12.96 mm for delineation of gross tumor volume (GTV) on the prostate cancer dataset. Similarly, on the OPC dataset, it achieved a DSC of 0.77 ± 0.11 versus 0.72 ± 0.09, IOU of 0.70 ± 0.09 versus 0.65 ± 0.07, and HD95 of 7.52 ± 4.8 mm versus 13.63 ± 7.13 mm, representing significant improvements (p < 0.05). For delineating the clinical target volume (CTV), Medformer achieved a DSC of 0.91 ± 0.04, IOU of 0.85 ± 0.05, and HD95 of 2.98 ± 1.60 mm, comparable to other state-of-the-art algorithms. CONCLUSIONS: Auto-delineation of the treatment target based on multimodal learning outperforms conventional approaches that rely purely on visual features. 
Our method could be adopted into routine practice to rapidly contour CTV/GTV.

12.
J Appl Clin Med Phys ; : e14461, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092893

ABSTRACT

The accuracy of artificial intelligence (AI)-generated contours for intact-breast and post-mastectomy radiotherapy plans was evaluated. Geometric and dosimetric comparisons were performed between auto-contours (ACs) and manual contours (MCs) produced by physicians for target structures. Breast and regional nodal structures were manually delineated on 66 breast cancer patients, and ACs were retrospectively generated. The characteristics of the breast/post-mastectomy chestwall (CW) and regional nodal structures (axillary [AxN], supraclavicular [SC], internal mammary [IM]) were geometrically evaluated by Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance. The structures were also evaluated dosimetrically by superimposing the MC clinically delivered plans onto the ACs to assess the impact of utilizing ACs with target dose (Vx%) evaluation. Positive geometric correlations between volume and DSC were observed for the intact breast, AxN, and CW; little or negative correlation between volume and DSC was found for IM and SC. For intact-breast plans, no significant dosimetric differences between ACs and MCs were observed for AxN V95% (p = 0.17) and SC V95% (p = 0.16), while IMN V90% differed significantly between ACs and MCs. The average V95% for intact-breast MCs (98.4%) and ACs (97.1%) were comparable but statistically different (p = 0.02). For post-mastectomy plans, AxN V95% (p = 0.35) and SC V95% (p = 0.08) were consistent between ACs and MCs, while IMN V90% was significantly different. Additionally, 94.1% of AC breasts met a ΔV95% variation <5% when DSC > 0.7, whereas only 62.5% of AC CWs achieved the same metric, despite the AC-CW V95% difference (p = 0.43) not being statistically significant. The AC intact-breast structure was dosimetrically similar to the MCs. The AC AxN and SC may require manual adjustments, and careful review should be performed for AC post-mastectomy CW and IMN before treatment planning. The findings of this study may guide the clinical decision-making process for the utilization of AI-driven ACs for intact-breast and post-mastectomy plans. Before clinical implementation of this auto-segmentation software, an in-depth assessment of agreement with each local facility's MCs is needed.

13.
Cancer Sci ; 115(10): 3415-3425, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39119927

ABSTRACT

A precise radiotherapy plan is crucial to ensure accurate segmentation of glioblastomas (GBMs) for radiation therapy. However, the traditional manual segmentation process is labor-intensive and heavily reliant on the experience of radiation oncologists. In this retrospective study, a novel auto-segmentation method is proposed to address these problems. To assess the method's applicability across diverse scenarios, it was developed and evaluated using a cohort of 148 eligible patients drawn from four multicenter datasets, with retrospectively collected data including noncontrast CT, multisequence MRI scans, and corresponding medical records. All patients were diagnosed with histologically confirmed high-grade glioma (HGG). A deep learning-based method (PKMI-Net) for automatically segmenting the gross tumor volume (GTV) and clinical target volumes (CTV1 and CTV2) of GBMs was proposed by leveraging prior knowledge from multimodal imaging. The proposed PKMI-Net demonstrated high accuracy in segmenting GTV, CTV1, and CTV2 in an 11-patient test set, achieving Dice similarity coefficients (DSC) of 0.94, 0.95, and 0.92, respectively; 95% Hausdorff distances (HD95) of 2.07, 1.18, and 3.95 mm; average surface distances (ASD) of 0.69, 0.39, and 1.17 mm; and relative volume differences (RVD) of 5.50%, 9.68%, and 3.97%. Moreover, the vast majority of the GTV, CTV1, and CTV2 contours produced by PKMI-Net were clinically acceptable and required no revision. In our multicenter evaluation, PKMI-Net exhibited consistent and robust generalizability across the various datasets, demonstrating its effectiveness in automatically segmenting GBMs. The proposed method using prior knowledge from multimodal imaging can improve the contouring accuracy of GBMs, which holds the potential to improve the quality and efficiency of GBM radiotherapy.


Subject(s)
Brain Neoplasms, Deep Learning, Glioblastoma, Magnetic Resonance Imaging, Multimodal Imaging, Humans, Glioblastoma/diagnostic imaging, Glioblastoma/radiotherapy, Glioblastoma/pathology, Retrospective Studies, Multimodal Imaging/methods, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/radiotherapy, Brain Neoplasms/pathology, Magnetic Resonance Imaging/methods, Male, Female, Middle Aged, X-Ray Computed Tomography/methods, Tumor Burden, Aged, Adult, Computer-Assisted Radiotherapy Planning/methods
14.
Article in English | MEDLINE | ID: mdl-38957573

ABSTRACT

Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice coefficient (DC) focuses on the truly segmented areas, while the Hausdorff distance (HD) only measures the maximal distance between the auto-segmentation boundary and the ground-truth boundary. Boundary-length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish correctly predicted boundary pixels from wrong ones. It is uncertain whether these metrics can reliably indicate the required manual mending effort for application in segmentation research. Therefore, in this paper, the potential of the above four metrics, as well as a novel metric called the Mendability Index (MI), to predict human correction effort is studied with linear and support vector regression models. 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground-truth segmentations, are utilized to train and test the prediction models. Five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with varying prediction errors for different objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.
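Added Path Length can be approximated for 2-D masks as the ground-truth boundary pixels that the auto-segmentation boundary misses, i.e. roughly the contour length a clinician would still have to draw. A simplified 4-connected sketch under that assumption (published APL definitions may differ in connectivity and distance tolerance):

```python
import numpy as np

def boundary(mask: np.ndarray) -> np.ndarray:
    """Foreground pixels with at least one 4-connected background neighbour."""
    m = np.asarray(mask, bool)
    p = np.pad(m, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return m & ~interior

def added_path_length(auto: np.ndarray, truth: np.ndarray) -> int:
    """Ground-truth boundary pixels absent from the auto-segmentation boundary."""
    return int(np.sum(boundary(truth) & ~boundary(auto)))

truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True  # 6x6 square
auto = truth.copy()
print(added_path_length(auto, truth))  # 0: identical contours need no mending
```

Unlike DC or HD, a count like this grows with the amount of boundary that must be redrawn, which is why such metrics are candidates for predicting correction effort.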

15.
Front Oncol ; 14: 1375096, 2024.
Article in English | MEDLINE | ID: mdl-39055552

ABSTRACT

Purpose: To evaluate organ at risk (OAR) auto-segmentation in the head and neck region of computed tomography images using two commercially available deep-learning-based auto-segmentation (DLAS) tools in a single-institution clinical application. Methods: Twenty-two OARs were manually contoured by clinicians according to published guidelines on planning computed tomography (pCT) images for 40 clinical head and neck cancer (HNC) cases. Automatic contours were generated for each patient using two deep-learning-based auto-segmentation models: Manteia AccuContour and MIM ProtégéAI. The accuracy and integrity of autocontours (ACs) were then compared to expert contours (ECs) using the Sørensen-Dice similarity coefficient (DSC) and mean distance (MD) metrics. Results: ACs were generated for 22 OARs using AccuContour and 17 OARs using ProtégéAI, with average contour generation times of 1 min/patient and 5 min/patient, respectively. EC and AC agreement was highest for the mandible (DSC 0.90 ± 0.16 and 0.91 ± 0.03) and lowest for the chiasm (DSC 0.28 ± 0.14 and 0.30 ± 0.14) for AccuContour and ProtégéAI, respectively. Using AccuContour, the average MD was <1 mm for 10 of the 22 OARs contoured, 1-2 mm for 6 OARs, and 2-3 mm for 6 OARs. For ProtégéAI, the average MD was <1 mm for 8 of 17 OARs, 1-2 mm for 6 OARs, and 2-3 mm for 3 OARs. Conclusions: Both DLAS programs proved to be valuable tools that significantly reduce the time required to generate large numbers of OAR contours in the head and neck region, even though manual editing of ACs is likely needed prior to implementation into treatment planning. The DSCs and MDs achieved were similar to those reported in other studies evaluating various other DLAS solutions. Still, small-volume structures with non-ideal contrast in CT images, such as nerves, remain very challenging and will require additional solutions to achieve sufficient results.
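The mean distance (MD) used above is typically computed between contour boundaries rather than full volumes. A possible numpy sketch, using simple 4-neighbour boundary extraction on 2D masks (the names and toy example are illustrative, not from the study):

```python
import numpy as np

def boundary_points(mask: np.ndarray) -> np.ndarray:
    """Foreground pixels of a 2D mask with at least one background
    4-neighbour, i.e. the contour of the structure."""
    m = mask.astype(bool)
    p = np.pad(m, 1)  # pad with background so edges are handled
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    return np.argwhere(m & ~interior)

def mean_distance(auto: np.ndarray, expert: np.ndarray) -> float:
    """Symmetric mean distance between autocontour and expert contour."""
    a, b = boundary_points(auto), boundary_points(expert)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

auto = np.zeros((8, 8)); auto[1:5, 1:5] = 1
expert = np.zeros((8, 8)); expert[2:6, 2:6] = 1
print(mean_distance(auto, auto))    # 0.0 for identical contours
print(mean_distance(auto, expert))  # small positive value for a 1-pixel shift
```

For real pCT data the same idea extends to 3D with 6-neighbour boundary extraction and voxel spacing applied to the coordinates.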

16.
Comput Med Imaging Graph ; 116: 102403, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38878632

ABSTRACT

BACKGROUND AND OBJECTIVES: Bio-medical image segmentation models typically attempt to predict one segmentation that resembles a ground-truth structure as closely as possible. However, as medical images are not perfect representations of anatomy, obtaining this ground truth is not possible. A commonly used surrogate is to have multiple expert observers define the same structure for a dataset. When multiple observers define the same structure on the same image, there can be significant differences depending on the structure, image quality/modality, and the region being defined. It is often desirable to estimate this type of aleatoric uncertainty in a segmentation model to help understand the region in which the true structure is likely to be positioned. Furthermore, obtaining these datasets is resource-intensive, so training such models using limited data may be required. With a small dataset size, differing patient anatomy is likely not well represented, causing epistemic uncertainty, which should also be estimated so that it can be determined for which cases the model is effective. METHODS: We use a 3D probabilistic U-Net to train a model from which several segmentations can be sampled to estimate the range of uncertainty seen between multiple observers. To ensure that the regions where observers disagree most are emphasised in model training, we expand the Generalised ELBO with Constrained Optimisation (GECO) loss function with an additional contour loss term to give attention to these regions. Ensemble and Monte-Carlo dropout (MCDO) uncertainty quantification methods are used during inference to estimate model confidence on unseen cases. We apply our methodology to two radiotherapy clinical trial datasets: a gastric cancer trial (TOPGEAR, TROG 08.08) and a post-prostatectomy prostate cancer trial (RAVES, TROG 08.03).
Each dataset contains only 10 cases for model development to segment the clinical target volume (CTV), which was defined by multiple observers on each case. An additional 50 cases are available as a hold-out dataset for each trial, in which only one observer defined the CTV structure on each case. Up to 50 samples were generated using the probabilistic model for each case in the hold-out dataset. To assess performance, each manually defined structure was matched to the closest sampled segmentation based on commonly used metrics. RESULTS: The TOPGEAR CTV model achieved a Dice Similarity Coefficient (DSC) and Surface DSC (sDSC) of 0.7 and 0.43 respectively, with the RAVES model achieving 0.75 and 0.71 respectively. Segmentation quality across cases in the hold-out datasets was variable; however, both the ensemble and MCDO uncertainty estimation approaches were able to accurately estimate model confidence, with a p-value < 0.001 for both TOPGEAR and RAVES when comparing against the DSC using the Pearson correlation coefficient. CONCLUSIONS: We demonstrated that it is possible to train auto-segmentation models that estimate aleatoric and epistemic uncertainty using limited datasets. Having the model estimate prediction confidence is important for understanding the unseen cases for which a model is likely to be useful.
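The per-voxel uncertainty that sampling-based approaches like the above expose can be summarised directly from the sampled segmentations: the foreground frequency across samples gives a probability map, and its binary entropy highlights where samples (or observers) disagree. A minimal numpy sketch of this idea (the toy arrays and names are illustrative, not the trial data or the paper's exact method):

```python
import numpy as np

def agreement_maps(samples: np.ndarray):
    """Given N sampled binary segmentations stacked as (N, H, W),
    return the per-voxel foreground probability and a per-voxel
    binary-entropy map (high entropy = strong disagreement)."""
    p = samples.mean(axis=0)
    eps = 1e-12  # avoid log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return p, entropy

# Three toy samples: all agree on a 2x2 core, one extends further right.
s = np.zeros((3, 4, 4))
s[:, 1:3, 1:3] = 1   # consensus region
s[0, 1:3, 3] = 1     # only sample 0 includes this column
p, h = agreement_maps(s)
print(p[1, 1])  # 1.0 -> full agreement, low uncertainty
print(p[1, 3])  # 1/3 -> disagreement, high uncertainty
```

The same summary works whether the samples come from a probabilistic U-Net, an ensemble, or MC dropout passes.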


Asunto(s)
Imagenología Tridimensional , Humanos , Incertidumbre , Imagenología Tridimensional/métodos , Neoplasias de la Próstata/radioterapia , Neoplasias de la Próstata/diagnóstico por imagen , Masculino , Ensayos Clínicos como Asunto , Conjuntos de Datos como Asunto , Algoritmos , Tomografía Computarizada por Rayos X
17.
Med Phys ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896829

ABSTRACT

BACKGROUND: Head and neck (HN) gross tumor volume (GTV) auto-segmentation is challenging due to the morphological complexity and low image contrast of targets. Multi-modality images, including computed tomography (CT) and positron emission tomography (PET), are used in the routine clinic to assist radiation oncologists in accurate GTV delineation. However, the availability of PET imaging may not always be guaranteed. PURPOSE: To develop a deep learning segmentation framework for automated GTV delineation of HN cancers using a combination of PET/CT images, while addressing the challenge of missing PET data. METHODS: Two datasets were included in this study: Dataset I: 524 (training) and 359 (testing) oropharyngeal cancer patients from different institutions, with their PET/CT pairs provided by the HECKTOR Challenge; Dataset II: 90 HN patients (testing) from a local institution, with their planning CT and PET/CT pairs. To handle potentially missing PET images, a model training strategy named the "Blank Channel" method was implemented. To simulate the absence of a PET image, a blank array with the same dimensions as the CT image was generated to meet the dual-channel input requirement of the deep learning model. During training, the model was randomly presented with either a real PET/CT pair or a blank/CT pair. This allowed the model to learn the relationship between the CT image and the corresponding GTV delineation based on the available modalities. As a result, our model can handle flexible inputs during prediction, making it suitable for cases where PET images are missing. To evaluate the performance of our proposed model, we trained it using the training patients from Dataset I and tested it with Dataset II. We compared our model (Model 1) with two other models trained for specific modality combinations: Model 2, trained with only CT images, and Model 3, trained with real PET/CT pairs.
The performance of the models was evaluated using quantitative metrics, including Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff Distance (HD95). In addition, we evaluated our Model 1 and Model 3 using the 359 test cases in Dataset I. RESULTS: Our proposed model(Model 1) achieved promising results for GTV auto-segmentation using PET/CT images, with the flexibility of missing PET images. Specifically, when assessed with only CT images in Dataset II, Model 1 achieved DSC of 0.56 ± 0.16, MSD of 3.4 ± 2.1 mm, and HD95 of 13.9 ± 7.6 mm. When the PET images were included, the performance of our model was improved to DSC of 0.62 ± 0.14, MSD of 2.8 ± 1.7 mm, and HD95 of 10.5 ± 6.5 mm. These results are comparable to those achieved by Model 2 and Model 3, illustrating Model 1's effectiveness in utilizing flexible input modalities. Further analysis using the test dataset from Dataset I showed that Model 1 achieved an average DSC of 0.77, surpassing the overall average DSC of 0.72 among all participants in the HECKTOR Challenge. CONCLUSIONS: We successfully refined a multi-modal segmentation tool for accurate GTV delineation for HN cancer. Our method addressed the issue of missing PET images by allowing flexible data input, thereby providing a practical solution for clinical settings where access to PET imaging may be limited.
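As described, the "Blank Channel" strategy amounts to substituting a zero-filled array for the PET channel: randomly during training, and deterministically whenever PET is absent at inference. A minimal sketch under that reading (the function name, shapes, and drop probability are illustrative assumptions, not from the paper):

```python
import numpy as np

def make_dual_channel_input(ct, pet=None, drop_prob=0.5, rng=None):
    """Stack CT with either the real PET volume or an all-zero "blank"
    of the same shape, so the network always receives two channels.
    During training a real PET is replaced by a blank with probability
    `drop_prob`; a missing PET (pet=None) is always blanked."""
    if rng is None:
        rng = np.random.default_rng()
    if pet is None or rng.random() < drop_prob:
        pet = np.zeros_like(ct)
    return np.stack([ct, pet], axis=0)  # shape (2, D, H, W)

ct = np.ones((2, 2, 2))
x = make_dual_channel_input(ct, pet=None)  # missing PET -> blank channel
print(x.shape, x[1].sum())                 # (2, 2, 2, 2) 0.0
```

Because the channel count never changes, the same trained network serves both CT-only and PET/CT cases without any architectural switch.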

18.
Dose Response ; 22(2): 15593258241263687, 2024.
Article in English | MEDLINE | ID: mdl-38912333

ABSTRACT

Background and Purpose: Artificial intelligence (AI) encompasses techniques that attempt to think like humans and mimic human behaviors. It has been considered as an alternative for many human-dependent steps in radiotherapy (RT), since human participation is a principal source of uncertainty in RT. The aim of this work is to provide a systematic summary of the current literature on AI applications in RT, and to clarify their role in RT practice from a clinical point of view. Materials and Methods: A systematic literature search of PubMed and Google Scholar was performed to identify original articles involving AI applications in RT from inception to 2022. Studies were included if they reported original data and explored clinical applications of AI in RT. Results: The selected studies were categorized into three aspects of RT: organ and lesion segmentation, treatment planning, and quality assurance. For each aspect, this review discusses how these AI tools can be involved in the RT protocol. Conclusions: Our study revealed that AI is a potential alternative for the human-dependent steps in the complex process of RT.

19.
In Vivo ; 38(4): 1712-1718, 2024.
Article in English | MEDLINE | ID: mdl-38936930

ABSTRACT

BACKGROUND/AIM: Intensity-modulated radiation therapy can deliver a highly conformal dose to a target while minimizing the dose to the organs at risk (OARs). Delineating the contours of OARs is time-consuming, and various automatic contouring software programs have been employed to reduce the delineation time. However, some software operations remain manual, so further time reduction is possible. This study aimed to automate the running of atlas-based auto-segmentation (ABAS) and the associated software operations using a scripting function, thereby reducing work time. MATERIALS AND METHODS: The Dice coefficient and Hausdorff distance were used to determine geometric accuracy. The manual delineation, automatic delineation, and modification times were measured. While modifying the contours, the degree of subjective correction was rated on a four-point scale. RESULTS: The model exhibited generally good geometric accuracy, although some OARs, such as the chiasm, optic nerve, retina, lens, and brain, require improvement. The average contour delineation time was reduced from 57 to 29 min (p<0.05). The subjective revision results indicated that most OARs required only minor modifications; only the submandibular gland, thyroid, and esophagus were rated as needing to be redrawn from scratch. CONCLUSION: The ABAS model and scripted automation in head and neck cancer reduced the work time and software operations. The time can be further reduced by improving contour accuracy.


Subject(s)
Head and Neck Neoplasms; Organs at Risk; Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Software; Humans; Head and Neck Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Dosage; Algorithms; Image Processing, Computer-Assisted/methods
20.
Phys Med ; 123: 103393, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38852363

ABSTRACT

BACKGROUND AND PURPOSE: One of the current roadblocks to the widespread use of Total Marrow Irradiation (TMI) and Total Marrow and Lymphoid Irradiation (TMLI) is the challenging and time-consuming tumor target contouring workflow. This study aims to develop a hybrid neural network model that enables accurate, automatic, and rapid segmentation of multi-class clinical target volumes. MATERIALS AND METHODS: Patients who underwent TMI and TMLI from January 2018 to May 2022 were included. Two independent oncologists manually contoured eight target volumes for each patient on CT images. A novel Dual-Encoder Alignment Network (DEA-Net) was developed and trained using 46 patients from one internal institution and independently evaluated on a total of 39 internal and external patients. Performance was evaluated using accuracy metrics and delineation time. RESULTS: The DEA-Net achieved a mean Dice similarity coefficient of 90.1 % ± 1.8 % on the internal testing dataset (23 patients) and 91.1 % ± 2.5 % on the external testing dataset (16 patients). The 95 % Hausdorff distance and average symmetric surface distance were 2.04 ± 0.62 mm and 0.57 ± 0.11 mm for the internal testing dataset, and 2.17 ± 0.68 mm and 0.57 ± 0.20 mm for the external testing dataset, respectively, outperforming most existing state-of-the-art methods. In addition, the automatic segmentation workflow reduced delineation time by 98 % compared to the conventional manual contouring process (mean 173 ± 29 s vs. 12168 ± 1690 s; P < 0.001). An ablation study validated the effectiveness of the hybrid structure. CONCLUSION: The proposed deep learning framework achieved comparable or superior target volume delineation accuracy, significantly accelerating the radiotherapy planning process.
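The 95 % Hausdorff distance reported above is the robust variant of HD: instead of the single worst surface-to-surface distance, it takes the 95th percentile of the pooled nearest-neighbour distances in both directions, damping the influence of outlier voxels. A minimal numpy sketch on toy masks (names and the toy example are illustrative, not from the study):

```python
import numpy as np

def hd95(a_mask: np.ndarray, b_mask: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between the
    foreground voxel sets of two binary masks (brute force)."""
    a = np.argwhere(np.asarray(a_mask, dtype=bool))
    b = np.argwhere(np.asarray(b_mask, dtype=bool))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # nearest-neighbour distances in both directions, pooled
    nn = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(nn, 95))

a = np.zeros((8, 8)); a[1:5, 1:5] = 1
b = np.zeros((8, 8)); b[2:6, 2:6] = 1
print(hd95(a, a))  # 0.0 for identical masks
print(hd95(a, b))  # between 1 and √2 for a one-pixel diagonal shift
```

The average symmetric surface distance reported alongside HD95 replaces the 95th percentile with the mean and is typically computed on surface voxels only, with real voxel spacing applied.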


Subject(s)
Bone Marrow; Deep Learning; Radiotherapy Planning, Computer-Assisted; Humans; Bone Marrow/radiation effects; Bone Marrow/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Lymphatic Irradiation/methods; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Male; Female