Results 1 - 7 of 7
1.
Radiother Oncol ; 192: 110110, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38272314

ABSTRACT

PURPOSE: One-table treatments, in which treatment imaging, preparation, and delivery all take place at a single treatment couch, could increase patient comfort and throughput for palliative treatments. On regular C-arm linacs, however, cone-beam CT (CBCT) image quality is currently insufficient. Our goal was therefore to assess the suitability of AI-generated, CBCT-based synthetic CT (sCT) images for target delineation and treatment planning in palliative radiotherapy. MATERIALS AND METHODS: CBCTs and planning CT scans of 22 female patients with pelvic bone metastases were included. For each CBCT, a corresponding sCT image was generated by a deep learning model in ADMIRE 3.38.0. Radiation oncologists delineated 23 target volumes (TV) on the sCTs (TVsCT) and scored their delineation confidence. The delineations were transferred to the planning CTs and manually adjusted where needed to yield gold-standard target volumes (TVclin). TVsCT were compared geometrically to TVclin using the Dice coefficient (DC) and Hausdorff distance (HD). The dosimetric impact of TVsCT inaccuracies was evaluated for VMAT plans with different PTV margins. RESULTS: Radiation oncologists scored the sCT quality as sufficient for 13/23 TVsCT (median: DC = 0.9, HD = 11 mm) and insufficient for 10/23 TVsCT (median: DC = 0.7, HD = 34 mm). In the sufficient category, the remaining inaccuracies could be compensated in 12/13 TVsCT by an additional margin of +1 to +4 mm to achieve coverage of V95% > 95% and V95% > 98%, respectively. CONCLUSION: The evaluated sCT quality allowed accurate delineation for most targets, and sCTs of insufficient quality could be identified reliably upfront. A moderate PTV margin expansion could address the remaining delineation inaccuracies. These findings therefore support further exploration of CBCT-based one-table treatments on C-arm linacs.
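The geometric comparison above rests on two standard mask-overlap metrics. A minimal numpy sketch of both, computed on binary masks (the brute-force pairwise distance in the Hausdorff computation is an assumption for illustration and is only practical for small masks; production tools use distance transforms):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient (DC) between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric Hausdorff distance (HD) between the foreground voxels of
    two masks, in the units of `spacing` (e.g. mm), via brute force."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    # pairwise Euclidean distances between all foreground voxels
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # worst-case nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy 2D example: two overlapping 10x10 squares shifted by 2 voxels
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[7:17, 7:17] = True
print(round(dice_coefficient(a, b), 2))  # 0.64
```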


Subject(s)
Pelvic Bones , Spiral Cone-Beam Computed Tomography , Humans , Female , Palliative Care , Pelvis , Tomography, X-Ray Computed , Cone-Beam Computed Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy Dosage
2.
Med Phys ; 51(1): 278-291, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37475466

ABSTRACT

BACKGROUND: In order to accurately accumulate delivered dose for head and neck cancer patients treated with the Adapt to Position workflow on the 1.5T magnetic resonance imaging (MRI)-linear accelerator (MR-linac), the low-resolution T2-weighted MRIs used for daily setup must be segmented to enable reconstruction of the delivered dose at each fraction. PURPOSE: In this pilot study, we evaluate various autosegmentation methods for head and neck organs at risk (OARs) on on-board setup MRIs from the MR-linac for off-line reconstruction of delivered dose. METHODS: Seven OARs (parotid glands, submandibular glands, mandible, spinal cord, and brainstem) were contoured on 43 images by seven observers each. Ground truth contours were generated using a simultaneous truth and performance level estimation (STAPLE) algorithm. Twenty autosegmentation methods in total were evaluated in ADMIRE: 1-9) atlas-based autosegmentation using a population atlas library (PAL) of 5/10/15 patients with STAPLE, patch fusion (PF), or random forest (RF) for label fusion; 10-19) autosegmentation using images from a patient's 1-4 prior fractions (individualized patient prior [IPP]) with STAPLE/PF/RF; 20) deep learning (DL) (3D ResUNet trained on 43 ground truth structure sets plus 45 contoured by one observer). Execution time was measured for each method. Autosegmented structures were compared to ground truth structures using the Dice similarity coefficient (DSC), mean surface distance (MSD), Hausdorff distance (HD), and Jaccard index (JI). For each metric and OAR, performance was compared to the inter-observer variability using Dunn's test with control. Methods were compared pairwise using the Steel-Dwass test for each metric pooled across all OARs. Further dosimetric analysis was performed on three high-performing autosegmentation methods (DL, IPP with RF and 4 fractions [IPP_RF_4], and IPP with 1 fraction [IPP_1]) and one low-performing method (PAL with STAPLE and 5 atlases [PAL_ST_5]).
For five patients, delivered doses from clinical plans were recalculated on setup images with the ground truth and autosegmented structure sets. Differences in maximum and mean dose to each structure between the ground truth and autosegmented structures were calculated and correlated with the geometric metrics. RESULTS: DL and IPP methods performed best overall; all significantly outperformed the inter-observer variability, with no significant differences between them in pairwise comparison. PAL methods performed worst overall; most were not significantly different from the inter-observer variability or from each other. DL was the fastest method (33 s per case) and PAL methods the slowest (3.7-13.8 min per case). Execution time increased with the number of prior fractions/atlases for IPP and PAL. For DL, IPP_1, and IPP_RF_4, the majority (95%) of dose differences were within ±250 cGy of ground truth, but outlier differences of up to 785 cGy occurred. Dose differences were much larger for PAL_ST_5, with outlier differences of up to 1920 cGy. Dose differences showed weak but significant correlations with all geometric metrics (R2 between 0.030 and 0.314). CONCLUSIONS: The autosegmentation methods offering the best combination of performance and execution time are DL and IPP_1. Dose reconstruction on on-board T2-weighted MRIs is feasible with autosegmented structures, with minimal dosimetric variation from ground truth, but contours should be visually inspected prior to dose reconstruction in an end-to-end dose accumulation workflow.
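The mean surface distance used above is computed between contour surfaces rather than full volumes. A minimal numpy sketch of a symmetric MSD, assuming isotropic voxel spacing and masks that do not touch the array border (the np.roll-based surface extraction wraps at the edges):

```python
import numpy as np

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Indices of foreground voxels with a face-adjacent background voxel.
    Uses np.roll, so the mask must not touch the array border."""
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(m.ndim):
        for shift in (1, -1):
            # a voxel is interior only if all face-neighbours are foreground
            interior &= np.roll(m, shift, axis=axis)
    return np.argwhere(m & ~interior)

def mean_surface_distance(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric mean surface distance (MSD) in the units of `spacing`."""
    sa = surface_voxels(a) * spacing
    sb = surface_voxels(b) * spacing
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)
    # average the two directed mean nearest-surface distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Identical masks yield an MSD of exactly zero; any shift between the surfaces yields a positive distance.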


Subject(s)
Head and Neck Neoplasms , Radiotherapy Planning, Computer-Assisted , Humans , Pilot Projects , Workflow , Radiotherapy Planning, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Magnetic Resonance Imaging/methods , Organs at Risk
3.
Front Oncol ; 13: 1209558, 2023.
Article in English | MEDLINE | ID: mdl-37483486

ABSTRACT

Introduction: Multi-sequence, multi-parameter MRIs are often used to define targets and/or organs at risk (OARs) in radiation therapy (RT) planning. Deep learning has so far focused on developing auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep learning-based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods: Using a previously developed 3DResUnet network, an mS-DLAS model was trained and tested on four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. The performance of all models was quantitatively evaluated using six surface and volumetric accuracy metrics. Results: The developed DLAS models were able to generate reasonable contours of 12 upper abdominal organs within 21 seconds per test case. The 3D average values of the Dice similarity coefficient (DSC), mean distance to agreement (MDA, mm), 95th-percentile Hausdorff distance (HD95%, mm), percent volume difference (PVD), surface DSC (sDSC), and relative added path length (rAPL, mm/cc) over all organs were 0.87, 1.79, 7.43, -8.95, 0.82, and 12.25, respectively, for the mS-DLAS model. Collectively, 71% of the contours auto-segmented by the three models had relatively high quality. Additionally, the obtained mS-DLAS model successfully segmented 9 out of 16 MRI sequences that were not used in model training. Conclusion: We have developed an MRI-based mS-DLAS model for auto-segmentation of upper abdominal organs on MRI. Multi-sequence segmentation is desirable in routine clinical RT practice for accurate organ and target delineation, particularly for abdominal tumors. Our work will act as a stepping stone toward fast and accurate segmentation on multi-contrast MRI and pave the way for MR-only guided radiation therapy.
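The Z-normalization mentioned in the pre-processing strategies is a standard intensity standardization for MRI, where absolute intensities are not calibrated across sequences. A minimal sketch, with the optional mask (e.g. to exclude background air from the statistics) being an assumption for illustration:

```python
import numpy as np

def z_normalize(img: np.ndarray, mask=None) -> np.ndarray:
    """Z-score normalization of an MR volume: subtract the mean and divide
    by the standard deviation, so intensities have zero mean and unit
    variance. If a boolean mask is given, statistics are computed over the
    masked voxels only."""
    vox = img[mask] if mask is not None else img
    return (img - vox.mean()) / (vox.std() + 1e-8)
```

Normalizing each input sequence this way puts T1- and T2-weighted intensities on a comparable scale before they are fed to a multi-sequence model.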

4.
Phys Med Biol ; 68(12)2023 06 15.
Article in English | MEDLINE | ID: mdl-37253374

ABSTRACT

Objective. In the current MR-Linac online adaptive workflow, air regions on the MR images need to be manually delineated for abdominal targets and then overridden with air density for dose calculation. Auto-delineation of these regions is desirable for speed, but poses a challenge because, unlike on computed tomography, air does not occupy all dark regions on the image. The purpose of this study is to develop an automated method to segment the air regions in MRI-guided adaptive radiation therapy (MRgART) of abdominal tumors. Approach. A modified ResUNet3D deep learning (DL)-based auto air delineation model was trained using 102 patients' MR images. The MR images were acquired with a dedicated in-house sequence named 'Air-Scan', which is designed to render air regions especially dark and accentuated. The air volumes generated by the newly developed DL model were compared with the manual air contours using geometric similarity (Dice similarity coefficient (DSC)) and dosimetric equivalence (gamma index and dose-volume parameters). Main results. The average DSC agreement between the DL-generated and manual air contours was 99% ± 1%. The gamma passing rate between dose calculations overriding the DL-generated versus the manual air volumes with a density of 0.01 was 97% ± 2% for a local gamma calculation with a tolerance of 2% and 2 mm. Dosimetric parameters for the planning target volume (PTV) and organs at risk (OARs) all agreed within 1% between the DL-generated and manual air-density overrides. The model runs in less than five seconds on a PC with a 28-core processor and an NVIDIA Quadro P2000 GPU. Significance. A DL-based automated segmentation method was developed that generates air volumes on specialized abdominal MR images practically equivalent to manually contoured air volumes.
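The density-override step in the workflow above is straightforward to sketch. A minimal numpy version, where the threshold-based segmentation is a deliberately naive stand-in for the study's ResUNet3D model (the `cutoff` value is a hypothetical parameter; on ordinary MR sequences dark regions are not all air, which is why the study uses a dedicated sequence and a DL model):

```python
import numpy as np

AIR_DENSITY = 0.01  # relative density assigned to air, as in the study

def override_air(density: np.ndarray, air_mask: np.ndarray) -> np.ndarray:
    """Return a copy of the density grid with voxels inside the
    (auto-delineated) air mask overridden to AIR_DENSITY, ready for
    dose calculation."""
    out = density.astype(float).copy()
    out[air_mask.astype(bool)] = AIR_DENSITY
    return out

def threshold_air(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Naive intensity-threshold stand-in for the DL air-segmentation
    model: flags dark voxels on an 'Air-Scan'-style image."""
    return image < cutoff
```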


Subject(s)
Abdominal Neoplasms , Deep Learning , Humans , Radiotherapy Planning, Computer-Assisted/methods , Abdominal Neoplasms/diagnostic imaging , Abdominal Neoplasms/radiotherapy , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
5.
Front Oncol ; 12: 975902, 2022.
Article in English | MEDLINE | ID: mdl-36425548

ABSTRACT

Background: Quick magnetic resonance imaging (MRI) scans with a low contrast-to-noise ratio are typically acquired for daily MRI-guided radiotherapy setup. However, for patients with head and neck (HN) cancer, these images are often insufficient for discriminating target volumes and organs at risk (OARs). In this study, we investigated a deep learning (DL) approach to generate high-quality synthetic images from low-quality images. Methods: We used 108 unique HN image sets of paired 2-minute T2-weighted scans (2mMRI) and 6-minute T2-weighted scans (6mMRI). Ninety image sets (~20,000 slices) were used to train a 2-dimensional generative adversarial DL model that used 2mMRI as input and 6mMRI as output. Eighteen image sets were used to test model performance. Similarity metrics, including the mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), were calculated between normalized synthetic 6mMRI and ground-truth 6mMRI for all test cases. In addition, a previously trained OAR DL auto-segmentation model was used to segment the right parotid gland, left parotid gland, and mandible on all test case images. Dice similarity coefficients (DSC) were calculated between 2mMRI and either ground-truth 6mMRI or synthetic 6mMRI for each OAR; two one-sided t-tests were applied between the ground-truth and synthetic 6mMRI to determine equivalence. Finally, a visual Turing test using paired ground-truth and synthetic 6mMRI was performed with three clinician observers; the percentage of images that were correctly identified was compared to random chance using proportion equivalence tests. Results: The median similarity metrics across the whole images were 0.19, 0.93, and 33.14 for MSE, SSIM, and PSNR, respectively. The median DSCs comparing ground-truth vs. synthetic 6mMRI auto-segmented OARs were 0.86 vs. 0.85, 0.84 vs. 0.84, and 0.82 vs. 0.85 for the right parotid gland, left parotid gland, and mandible, respectively (equivalence p<0.05 for all OARs). The percentage of images correctly identified was equivalent to chance (p<0.05 for all observers). Conclusions: Using 2mMRI inputs, we demonstrate that DL-generated synthetic 6mMRI outputs have high similarity to ground-truth 6mMRI, though further improvements can be made. Our study facilitates the clinical incorporation of synthetic MRI in MRI-guided radiotherapy.
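Two of the image-similarity metrics above have simple closed forms on normalized images. A minimal numpy sketch of MSE and PSNR (SSIM is more involved; a windowed implementation is available as `skimage.metrics.structural_similarity`):

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((x - y) ** 2))

def psnr(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; `data_range` is the intensity
    span of the normalized images (1.0 for [0, 1]-scaled data)."""
    m = mse(x, y)
    # identical images have zero error, hence infinite PSNR
    return float('inf') if m == 0 else float(10.0 * np.log10(data_range ** 2 / m))
```

For example, two [0, 1]-scaled images differing uniformly by 0.1 have an MSE of 0.01 and a PSNR of 20 dB, which gives a feel for the reported median PSNR of 33.14.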

6.
Adv Radiat Oncol ; 7(5): 100968, 2022.
Article in English | MEDLINE | ID: mdl-35847549

ABSTRACT

Purpose: Fast and accurate auto-segmentation on daily images is essential for magnetic resonance imaging (MRI)-guided adaptive radiation therapy (ART). However, state-of-the-art auto-segmentation based on deep learning still has limited success, particularly for complex structures in the abdomen. This study aimed to develop an automatic contour refinement (ACR) process to quickly correct unacceptable auto-segmented contours. Methods and Materials: An improved level set-based active contour model (ACM) was implemented for the ACR process and tested on the deep learning-based auto-segmentation of 80 abdominal MRI sets along with their ground truth contours. The performance of the ACR process was evaluated using 4 contour accuracy metrics, the Dice similarity coefficient (DSC), mean distance to agreement (MDA), surface DSC, and added path length (APL), on the auto-segmented contours of the small bowel, large bowel, combined bowels, pancreas, duodenum, and stomach. Results: A portion (3%-39%) of the corrected contours became practically acceptable per the American Association of Physicists in Medicine Task Group 132 (TG-132) recommendation (DSC >0.8 and MDA <3 mm). The best correction performance was seen in the combined bowels, where for the contours with major errors (initial DSC <0.5 or MDA >8 mm), the mean DSC increased from 0.34 to 0.59, the mean MDA decreased from 7.02 mm to 5.23 mm, and the APL was reduced by almost 20 mm, whereas for the contours with minor errors, the mean DSC increased from 0.72 to 0.79, the mean MDA decreased from 3.35 mm to 3.29 mm, and more than one-third (39%) of the ACR contours became clinically acceptable. The execution time of the ACR process on one subregion was less than 2 seconds using an NVIDIA GTX 1060 GPU. Conclusions: The ACR process implemented based on the ACM was able to quickly correct some inaccurate contours produced by MRI-based deep learning auto-segmentation of complex abdominal anatomy.
The ACR method may be integrated into the auto-segmentation step to accelerate the process of MRI-guided ART.
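The acceptability and error-severity thresholds quoted in the abstract lend themselves to a small triage helper. A sketch using exactly the cutoffs stated above (TG-132: DSC > 0.8 and MDA < 3 mm; major errors: DSC < 0.5 or MDA > 8 mm); the function names are our own:

```python
def tg132_acceptable(dsc: float, mda_mm: float) -> bool:
    """Clinically acceptable contour per the TG-132 thresholds cited in
    the study: DSC > 0.8 and MDA < 3 mm."""
    return dsc > 0.8 and mda_mm < 3.0

def error_severity(dsc: float, mda_mm: float) -> str:
    """Triage an auto-segmented contour before refinement, using the
    study's definition of major errors (DSC < 0.5 or MDA > 8 mm)."""
    if tg132_acceptable(dsc, mda_mm):
        return "acceptable"
    if dsc < 0.5 or mda_mm > 8.0:
        return "major"
    return "minor"
```

Plugging in the combined-bowel means reported above, the pre-correction major-error contours (DSC 0.34, MDA 7.02 mm) classify as "major" and the minor-error contours (DSC 0.72, MDA 3.35 mm) as "minor".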

7.
Med Phys ; 49(3): 1686-1700, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35094390

ABSTRACT

PURPOSE: To reduce workload and inconsistencies in organ segmentation for radiation treatment planning, we developed and evaluated general and custom autosegmentation models on computed tomography (CT) for three major tumor sites using a well-established deep convolutional neural network (DCNN). METHODS: Five CT-based autosegmentation models for 42 organs at risk (OARs) in the head and neck (HN), abdomen (ABD), and male pelvis (MP) were developed using a full three-dimensional (3D) DCNN architecture. Two types of deep learning (DL) models were trained separately, using either general, diversified multi-institutional datasets or custom, well-controlled single-institution datasets. To improve segmentation accuracy, an adaptive spatial resolution approach for small and/or narrow OARs and a pseudo scan extension approach, for cases where the CT scan length is too short to cover entire organs, were implemented. The performance of the obtained models was evaluated based on the accuracy and clinical applicability of the autosegmented contours, using qualitative visual inspection and quantitative calculation of the dice similarity coefficient (DSC), mean distance to agreement (MDA), and time efficiency. RESULTS: The five DL autosegmentation models developed for the three anatomical sites were found to have high accuracy (DSC ranging from 0.8 to 0.98) for 74% of OARs and marginally acceptable accuracy for 26% of OARs. The custom models performed slightly better than the general models, even though smaller custom datasets were used for the custom model training. The organ-based approaches improved autosegmentation accuracy for small or complex organs (e.g., eye lens, optic nerves, inner ears, and bowels). Compared with traditional manual contouring times, the autosegmentation times, including subsequent manual editing if necessary, were substantially reduced: by 88% for the MP, 80% for the HN, and 65% for the ABD models.
CONCLUSIONS: The obtained autosegmentation models, incorporating organ-based approaches, were found to be effective and accurate for most OARs in the male pelvis, head and neck, and abdomen. We have demonstrated that our multianatomical DL autosegmentation models are clinically useful for radiation treatment planning.
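The pseudo scan extension mentioned in the methods pads a too-short scan so organs are fully covered; the abstract does not give the implementation, so the following is only a minimal numpy sketch of the idea, replicating end slices along the scan axis:

```python
import numpy as np

def pseudo_extend(ct: np.ndarray, n_slices: int) -> np.ndarray:
    """Pad a CT volume (slices, rows, cols) along the slice axis by
    replicating the first and last slices, a simple stand-in for the
    study's pseudo scan extension when the scan is too short to cover
    an entire organ."""
    return np.pad(ct, ((n_slices, n_slices), (0, 0), (0, 0)), mode='edge')
```

The padded regions carry no new anatomy; they only give the network enough superior/inferior context to segment organs that run off the ends of the scan.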


Subject(s)
Deep Learning , Head and Neck Neoplasms , Abdomen/diagnostic imaging , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Humans , Image Processing, Computer-Assisted/methods , Male , Organs at Risk , Pelvis/diagnostic imaging , Radiotherapy Planning, Computer-Assisted/methods