1.
Phys Med Biol ; 69(11)2024 May 29.
Article En | MEDLINE | ID: mdl-38663411

Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients impact the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer learning to overcome the challenges of inter-scanner SR of 3D cine MRI. Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network using clinical patient datasets, and (2) a psSR network using transfer learning to target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC). Results. Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (13.8-21.7) and 94.7% (0.38-0.74) compared to cine MRIs, relative to the reference 0.55 T breath-hold HR MRI. In the contour evaluation, DSC increased by 15% (0.79-0.91). On average, transfer learning took 90 s, psSR reconstruction took 4.51 ms per volume, and auto-segmentation took 210 ms. Significance. The proposed psSR reconstruction substantially increased image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer learning.
This approach would be effective in overcoming cohort- and scanner-dependency of deep-learning for MRgRT.
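The percentage gains reported above (PSNR 13.8 to 21.7 dB, SSIM 0.38 to 0.74) follow from a simple relative-improvement calculation; a minimal sketch in Python (function names are illustrative, not from the paper):

```python
import math

def psnr_from_mse(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

def relative_gain(before, after):
    """Percent improvement of a metric relative to its starting value."""
    return 100.0 * (after - before) / before
```

For example, `relative_gain(13.8, 21.7)` evaluates to about 57.2 and `relative_gain(0.38, 0.74)` to about 94.7, matching the reported improvements.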


Imaging, Three-Dimensional; Magnetic Resonance Imaging, Cine; Humans; Magnetic Resonance Imaging, Cine/methods; Imaging, Three-Dimensional/methods; Radiotherapy, Image-Guided/methods; Deep Learning
2.
Breast ; 73: 103599, 2024 Feb.
Article En | MEDLINE | ID: mdl-37992527

PURPOSE: To quantify interobserver variation (IOV) in target volume and organs-at-risk (OAR) contouring across 31 institutions in breast cancer cases and to explore the clinical utility of deep learning (DL)-based auto-contouring in reducing potential IOV. METHODS AND MATERIALS: In phase 1, two breast cancer cases were randomly selected and distributed to multiple institutions for contouring of six clinical target volumes (CTVs) and eight OARs. In phase 2, auto-contour sets were generated using a previously published DL breast segmentation model and made available to all participants. The difference in IOV between the contours submitted in phases 1 and 2 was investigated quantitatively using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The qualitative analysis used contour heat maps to visualize the extent and location of these variations and the required modifications. RESULTS: Over 800 pairwise comparisons were analysed for each structure in each case. Quantitative phase 2 metrics showed significant improvement in the mean DSC (from 0.69 to 0.77) and HD (from 34.9 to 17.9 mm). Quantitative analysis showed increased interobserver agreement in phase 2, specifically for CTV structures (5-19 %), leading to fewer manual adjustments. The underlying causes of IOV differences were identified using a questionnaire and hierarchical clustering analysis based on CTV volumes. CONCLUSION: DL-based auto-contours significantly improved contour agreement for OARs and CTVs, both qualitatively and quantitatively, suggesting a potential role in minimizing radiation therapy protocol deviations.
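The Dice similarity coefficient used throughout these comparisons reduces to a one-line overlap computation on binary masks; a minimal sketch (a generic implementation, not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary masks (0/1).

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    overlap = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * overlap / total if total else 1.0
```

For example, `dice([1, 1, 0, 0], [1, 0, 1, 0])` is 0.5: the masks agree on one of the two voxels each marks.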


Breast Neoplasms; Deep Learning; Humans; Female; Breast Neoplasms/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Organs at Risk; Breast/diagnostic imaging
4.
Comput Med Imaging Graph ; 109: 102299, 2023 10.
Article En | MEDLINE | ID: mdl-37729827

Non-invasive early detection and differentiation grading of lung adenocarcinoma using computed tomography (CT) images are clinically important for both clinicians and patients, including for determining the extent of lung resection. However, these are difficult to accomplish using preoperative images, and CT-based diagnoses often differ from postoperative pathologic diagnoses. In this study, we proposed an integrated detection and classification algorithm (IDCal) for diagnosing ground-glass opacity nodules (GGN) using CT images and other patient information, and compared its performance with that of other diagnostic modalities. All labeling was confirmed by a thoracic surgeon referring to the patient's CT image and biopsy report. The detection phase was implemented via a modified FC-DenseNet to contour the lesions as elaborately as possible and secure the reliability of the classification phase for subsequent applications. Then, by integrating radiomics features and other general patient information, the lesions were dichotomously reclassified as "non-invasive" (atypical adenomatous hyperplasia, adenocarcinoma in situ, and minimally invasive adenocarcinoma) or "invasive" (invasive adenocarcinoma). Data from 168 GGN cases were used to develop the IDCal, which was then validated on 31 independent CT scans. IDCal showed high accuracy in GGN detection (sensitivity, 0.970; false discovery rate, 0.697) and classification (accuracy, 0.97; F1-score, 0.98; ROC AUC, 0.96). In conclusion, the proposed IDCal detects and classifies GGN with excellent performance; our multimodal prediction model thus has high potential as an auxiliary diagnostic tool for GGN to help clinicians.


Adenocarcinoma of Lung; Adenocarcinoma; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Reproducibility of Results; Retrospective Studies; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma of Lung/pathology; Adenocarcinoma/diagnostic imaging; Adenocarcinoma/pathology; Algorithms; Demography
5.
Med Phys ; 50(4): 1947-1961, 2023 Apr.
Article En | MEDLINE | ID: mdl-36310403

PURPOSE: Online adaptive radiotherapy (ART) requires accurate and efficient auto-segmentation of target volumes and organs-at-risk (OARs) in mostly cone-beam computed tomography (CBCT) images, which often have severe artifacts and lack soft-tissue contrast, making direct segmentation very challenging. Propagating expert-drawn contours from the pretreatment planning CT through traditional or deep learning (DL)-based deformable image registration (DIR) can achieve improved results in many situations. Typical DL-based DIR models are population based, that is, trained with a dataset for a population of patients, and so they may be affected by the generalizability problem. METHODS: In this paper, we propose a method called test-time optimization (TTO) to refine a pretrained DL-based DIR population model, first for each individual test patient, and then progressively for each fraction of online ART treatment. Our proposed method is less susceptible to the generalizability problem and thus can improve overall performance of different DL-based DIR models by improving model accuracy, especially for outliers. Our experiments used data from 239 patients with head-and-neck squamous cell carcinoma to test the proposed method. First, we trained a population model with 200 patients and then applied TTO to the remaining 39 test patients by refining the trained population model to obtain 39 individualized models. We compared each of the individualized models with the population model in terms of segmentation accuracy. RESULTS: The average improvement of the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) of segmentation can be up to 0.04 (5%) and 0.98 mm (25%), respectively, with the individualized models compared to the population model over 17 selected OARs and a target of 39 patients. Although the average improvement may seem mild, we found that the improvement for outlier patients with structures of large anatomical changes is significant. 
For the state-of-the-art architecture VoxelMorph, 10 of the 39 test patients showed at least a 0.05 DSC improvement or a 2 mm HD95 improvement with TTO, averaged over the 17 selected structures. By deriving the individualized model from the pretrained population model, TTO models can be ready in about 1 min. We also generated adapted fractional models for each of the 39 test patients by progressively refining the individualized models using TTO on CBCT images acquired at later fractions of online ART treatment. When adapting the individualized model to a later fraction of the same patient, the model can be ready in less than a minute, with slightly improved accuracy. CONCLUSIONS: The proposed TTO method is well suited for online ART and can boost segmentation accuracy for DL-based DIR models, especially for outlier patients where pretrained models fail.
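Test-time optimization is, at heart, a few extra gradient steps on the pretrained weights using only the test patient's own data. A toy one-parameter illustration of that refinement loop (a deliberately simplified stand-in for refining a DIR network such as VoxelMorph, not the paper's pipeline):

```python
def tto_refine(theta, xs, ys, lr=0.1, steps=200):
    """Refine a pretrained scalar model y ~= theta * x on one patient's data.

    Plain gradient descent on the mean squared error; overfitting to this
    single patient is intentional, as in test-time optimization.
    """
    n = len(xs)
    for _ in range(steps):
        grad = sum(2.0 * (theta * x - y) * x for x, y in zip(xs, ys)) / n
        theta -= lr * grad
    return theta
```

Starting from a "population" value theta = 1.0, refining on a patient whose data follow y = 2x drives theta to about 2.0, mirroring how TTO specializes the population model to each test patient.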


Head and Neck Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods
6.
Phys Med Biol ; 67(18)2022 09 12.
Article En | MEDLINE | ID: mdl-36093921

Objective. To establish an open framework for developing plan optimization models for knowledge-based planning (KBP). Approach. Our framework includes radiotherapy treatment data (i.e. reference plans) for 100 patients with head-and-neck cancer who were treated with intensity-modulated radiotherapy. The data also include high-quality dose predictions from 19 KBP models that were developed by different research groups using out-of-sample data during the OpenKBP Grand Challenge. The dose predictions were input to four fluence-based dose mimicking models to form 76 unique KBP pipelines that generated 7600 plans (76 pipelines × 100 patients). The predictions and KBP-generated plans were compared to the reference plans via: the dose score, which is the average mean absolute voxel-by-voxel difference in dose; the deviation in dose-volume histogram (DVH) points; and the frequency of clinical planning criteria satisfaction. We also performed a theoretical investigation to justify our dose mimicking models. Main results. The rank-order correlation of the dose score between predictions and their KBP pipelines ranged from 0.50 to 0.62, indicating that the quality of the predictions was generally positively correlated with the quality of the plans. Additionally, compared to the input predictions, the KBP-generated plans performed significantly better (p < 0.05; one-sided Wilcoxon test) on 18 of 23 DVH points. Similarly, each optimization model generated plans that satisfied a higher percentage of criteria than the reference plans, which satisfied 3.5% more criteria than the set of all dose predictions. Lastly, our theoretical investigation demonstrated that the dose mimicking models generate plans that are also optimal for an inverse planning model. Significance. This was the largest international effort to date for evaluating the combination of KBP prediction and optimization models. We found that the best performing models significantly outperformed the reference dose and dose predictions. In the interest of reproducibility, our data and code are freely available.
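The dose score and DVH points used above are simple to state; a minimal sketch of both, assuming flat per-voxel dose arrays (generic illustrations, not the challenge's reference code):

```python
import math

def dose_score(pred_dose, ref_dose):
    """Mean absolute voxel-by-voxel dose difference; lower is better."""
    return sum(abs(p - r) for p, r in zip(pred_dose, ref_dose)) / len(ref_dose)

def dose_at_volume(doses, volume_fraction):
    """DVH point: minimum dose received by the hottest `volume_fraction`
    of a structure's voxels (e.g. volume_fraction=0.95 gives D95)."""
    ranked = sorted(doses, reverse=True)
    idx = max(0, math.ceil(volume_fraction * len(ranked)) - 1)
    return ranked[idx]
```

With per-voxel doses 1 through 10, `dose_at_volume(..., 0.5)` returns 6: half of the voxels receive at least that dose.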


Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Humans; Knowledge Bases; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Reproducibility of Results
7.
Cancers (Basel) ; 14(10)2022 May 23.
Article En | MEDLINE | ID: mdl-35626158

Recently, several efforts have been made to develop deep learning (DL) algorithms for automatic detection and segmentation of brain metastases (BM). In this study, we developed an advanced DL model for BM detection and segmentation, especially for small-volume BM. From the institutional cancer registry, contrast-enhanced magnetic resonance images of 65 patients with 603 BM were collected to train and evaluate our DL model. Of the 65 patients, 12 patients with 58 BM were assigned to the test set for performance evaluation. Ground truth was established by one radiation oncologist, who manually delineated the BM, and cross-checked by another. Unlike previous studies, our study dealt with relatively small BM, so the area occupied by each BM in the high-resolution images was small. To learn small BM better, we applied an overlapping-patch preprocessing technique and 2.5-dimensional (2.5D) training to the well-known 2D U-Net architecture. The sensitivity and average false-positive rate were measured as detection performance; their values were 97% and 1.25 per patient, respectively. The Dice coefficient with dilation and the 95% Hausdorff distance were measured as segmentation performance; their values were 75% and 2.057 mm, respectively. Our DL model can detect and segment small-volume BM with good performance, providing considerable benefit for clinicians through automatic detection and segmentation of BM for stereotactic ablative radiotherapy.

8.
Radiat Oncol ; 17(1): 83, 2022 Apr 22.
Article En | MEDLINE | ID: mdl-35459221

BACKGROUND: Adjuvant radiation therapy improves overall survival and loco-regional control in patients with breast cancer. However, radiation-induced heart disease, which occurs after treatment from incidental radiation exposure to the heart, is an emerging challenge. This study aimed to generate synthetic contrast-enhanced computed tomography (SCECT) from non-contrast CT (NCT) using deep learning (DL) and to investigate its role in contouring cardiac substructures. We also aimed to determine its applicability for a retrospective study of the substructure volume-dose relationship for predicting radiation-induced heart disease. METHODS: We prepared NCT-CECT cardiac scan pairs of 59 patients. Of these, 35, 4, and 20 pairs were used for training, validation, and testing, respectively. We adopted a conditional generative adversarial network as the framework to generate SCECT. SCECT was validated in three stages: (1) the similarity between SCECT and CECT was evaluated; (2) manual contouring was performed on SCECT and CECT with a sufficient interval between sessions, and the geometric similarity of cardiac substructures was measured between them; (3) the treatment plan was quantitatively analyzed based on the contours of SCECT and CECT. RESULTS: While the mean values (± standard deviation) of the mean absolute error, peak signal-to-noise ratio, and structural similarity index measure between SCECT and CECT were 20.66 ± 5.29, 21.57 ± 1.85, and 0.77 ± 0.06, those between NCT and CECT were 23.95 ± 6.98, 20.67 ± 2.34, and 0.76 ± 0.07, respectively. The Dice similarity coefficient and mean surface distance between the contours of SCECT and CECT were 0.81 ± 0.06 and 2.44 ± 0.72, respectively. The dosimetry analysis displayed error rates of 0.13 ± 0.27 Gy and 0.71 ± 1.34% for the mean heart dose and V5Gy, respectively. CONCLUSION: Our findings demonstrated the feasibility of SCECT generation from NCT and its potential for cardiac substructure delineation in patients who underwent breast radiation therapy.


Breast Neoplasms; Heart Diseases; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/radiotherapy; Feasibility Studies; Female; Humans; Neural Networks, Computer; Retrospective Studies; Tomography, X-Ray Computed/methods
9.
Phys Med Biol ; 67(11)2022 05 24.
Article En | MEDLINE | ID: mdl-35483350

Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localizing the tumor from the scarce projections. For liver radiotherapy, this challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissue. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking, and real-time plan adaptation, and it can be adapted to other anatomical sites as well.


Liver Neoplasms; Radiotherapy, Image-Guided; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/radiotherapy; Neural Networks, Computer; Radiography; Radiotherapy, Image-Guided/methods; X-Rays
10.
Med Phys ; 49(1): 488-496, 2022 Jan.
Article En | MEDLINE | ID: mdl-34791672

PURPOSE: Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, there remain unsolved challenges to achieving satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning. In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). METHODS: Implementing the IDOL framework for any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse training dataset of N patients, as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is thus widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the autocontouring task on replanning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. RESULTS: In the replanning CT autocontouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 with the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model.
Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework. CONCLUSIONS: In this study, we propose a novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data but existing prior information, which is usually true in the medical setting in general and is especially true in ART.
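Stage (2) of IDOL hinges on turning one patient's prior data into a small training set via perturbations; a minimal sketch of such an augmentation step, assuming additive Gaussian noise (the paper's actual perturbations and augmentations may differ):

```python
import random

def augment_single_patient(image, n_copies=8, noise_sd=0.01, seed=0):
    """Build a patient-specific training set from one image by adding
    small Gaussian perturbations, enabling intentional overfitting."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, noise_sd) for v in image]
            for _ in range(n_copies)]
```

Each perturbed copy stays close to the original image, so a model trained on this set overfits to the patient's anatomy rather than to the noise.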


Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted
11.
Radiat Oncol ; 16(1): 203, 2021 Oct 14.
Article En | MEDLINE | ID: mdl-34649569

PURPOSE: To study, with a group of experts, the performance of a proposed deep learning-based autocontouring system in delineating organs at risk (OARs) in breast radiotherapy. METHODS: Eleven experts from two institutions delineated nine OARs in 10 cases of adjuvant radiotherapy after breast-conserving surgery. Autocontours were then provided to the experts for correction. Overall, 110 manual contours, 110 corrected autocontours, and 10 autocontours of each type of OAR were analyzed. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to compare the degree of agreement between the best manual contour (chosen by an independent expert committee) and each autocontour, corrected autocontour, and manual contour. Higher DSCs and lower HDs indicated better geometric overlap. The time saved using the autocontouring system was examined, and user satisfaction was evaluated using a survey. RESULTS: Manual contours, corrected autocontours, and autocontours had similar accuracy in the average DSC value (0.88 vs. 0.90 vs. 0.90). Relative to the manual contours, the autocontours ranked second based on DSC and first based on HD. Interphysician variation among the experts was reduced in corrected autocontours compared to manual contours (DSC: 0.89-0.90 vs. 0.87-0.90; HD: 4.3-5.8 mm vs. 5.3-7.6 mm). Among the manual delineations, the breast contours had the largest variations, which improved most significantly with the autocontouring system. The total mean contouring time for nine OARs was 37 min for manual contours and 6 min for corrected autocontours. The survey revealed good user satisfaction. CONCLUSIONS: The autocontouring system delineated OARs with performance similar to that of the experts' manual contouring. This system can be valuable in improving the quality of breast radiotherapy and reducing interphysician variability in clinical practice.
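The Hausdorff distance reported alongside DSC measures the worst-case boundary disagreement between two contours; a minimal point-set sketch (generic, not the study's implementation):

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (2D or 3D tuples)."""
    def directed(src, dst):
        # Largest distance from any source point to its nearest destination point.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

For example, `hausdorff([(0, 0), (1, 0)], [(0, 0), (3, 0)])` is 2.0, driven by the outlying point (3, 0); unlike DSC, a single stray boundary point dominates the score.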


Breast Neoplasms/pathology; Deep Learning; Image Processing, Computer-Assisted/methods; Observer Variation; Radiation Oncologists/standards; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Adjuvant/methods; Breast Neoplasms/radiotherapy; Female; Humans; Organs at Risk/radiation effects; Radiotherapy Dosage; Radiotherapy, Intensity-Modulated/methods
12.
Front Vet Sci ; 8: 721612, 2021.
Article En | MEDLINE | ID: mdl-34552975

Purpose: This study was conducted to develop a deep learning-based automatic segmentation (DLBAS) model of head and neck organs for radiotherapy (RT) in dogs, and to evaluate its feasibility for RT planning. Materials and Methods: Fifteen organs at risk (OARs) in the head and neck of dogs were defined for segmentation. Post-contrast computed tomography (CT) was performed in 90 dogs. The training and validation sets comprised 80 CT data sets, with 20 data sets used for testing. The accuracy of the segmentation was assessed using both the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), referencing the expert contours as the ground truth. An additional 10 clinical test sets, with relatively large displacement or deformation of organs, were selected for verification in cancer patients. To evaluate applicability in cancer patients and the impact of expert intervention, three methods were compared: HA, DLBAS, and HA_DLBAS (expert readjustment of the predictions obtained via DLBAS on the clinical test sets). Results: The DLBAS model (in the 20 test sets) showed reliable DSC and HD values and a short contouring time of ~3 s. The average (mean ± standard deviation) DSC (0.83 ± 0.04) and HD (2.71 ± 1.01 mm) values were similar to those of previous human studies. The DLBAS was highly accurate in cases without large displacement of head and neck organs. However, in the 10 clinical test sets, DLBAS showed lower DSC (0.78 ± 0.11) and higher HD (4.30 ± 3.69 mm) values than in the test sets. Compared with both HA (DSC: 0.85 ± 0.06; HD: 2.74 ± 1.18 mm) and DLBAS, HA_DLBAS presented better metrics and smaller deviations (DSC: 0.94 ± 0.03; HD: 2.30 ± 0.41 mm). In addition, the contouring time of HA_DLBAS (30 min) was less than that of HA (80 min). Conclusion: The HA_DLBAS method and the proposed DLBAS were highly consistent and robust in performance. Thus, DLBAS has great potential as a standalone or supportive tool in the key processes of RT planning.

13.
Phys Med Biol ; 66(20)2021 10 01.
Article En | MEDLINE | ID: mdl-34530421

Objective. Owing to the superior soft tissue contrast of MRI, MRI-guided adaptive radiotherapy (ART) is well suited to managing interfractional changes in anatomy. An MRI-only workflow is desirable, but producing synthetic CT (sCT) data through paired data-driven deep learning (DL) for abdominal dose calculations remains a challenge due to the highly variable presence of intestinal gas. We present the preliminary dosimetric evaluation of our novel approach to sCT reconstruction that is well suited to handling intestinal gas in abdominal MRI-only ART. Approach. We utilize a paired data DL approach enabled by the intensity projection prior, in which well-matching training pairs are created by propagating air from MRI to corresponding CT scans. Evaluations focus on two classes: patients with (1) little involvement of intestinal gas, and (2) notable differences in intestinal gas presence between corresponding scans. Comparisons between sCT-based plans and CT-based clinical plans for both classes are made at the first treatment fraction to highlight the dosimetric impact of the variable presence of intestinal gas. Main results. Class 1 patients (n = 13) demonstrate differences in prescribed dose coverage of the PTV of 1.3 ± 2.1% between clinical plans and sCT-based plans. Mean DVH differences in all structures for Class 1 patients are found to be statistically insignificant. In Class 2 (n = 20), target coverage is 13.3 ± 11.0% higher in the clinical plans and mean DVH differences are found to be statistically significant. Significance. Significant deviations in calculated doses arising from the variable presence of intestinal gas in corresponding CT and MRI scans result in uncertainty in high-dose regions that may limit the effectiveness of adaptive dose escalation efforts.
We have proposed a paired data-driven DL approach to sCT reconstruction for accurate dose calculations in abdominal ART enabled by the creation of a clinically unavailable training data set with well-matching representations of intestinal gas.


Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Humans; Magnetic Resonance Imaging/methods; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Tomography, X-Ray Computed/methods
14.
Biomed Phys Eng Express ; 7(5)2021 08 18.
Article En | MEDLINE | ID: mdl-34375963

MR-guided radiotherapy (MRgRT) systems provide excellent soft tissue imaging immediately prior to and in real time during radiation delivery for cancer treatment. However, 2D cine MRI often has limited spatial resolution due to its high temporal resolution. This work applies a super resolution machine learning framework to low resolution (LR) sagittal 2D cine MRI images (3.5 mm pixel edge length), acquired on an MRgRT system at 4 frames per second (FPS), to generate super resolution (SR) images (0.9 mm pixel edge length). LR images were collected from 50 pancreatic cancer patients treated on a ViewRay MR-LINAC. SR images were evaluated using three methods: (1) intrinsic image quality metrics; (2) relative metrics, including edge detection and the structural similarity index (SSIM); and (3) automatically generated tumor contours, created on both LR and SR images to evaluate target delineation and compared with DICE and SSIM. Intrinsic image quality metrics all had statistically significant improvements for SR images versus LR images, with mean (±1 SD) BRISQUE scores of 29.65 ± 2.98 and 42.48 ± 0.98 for SR and LR, respectively. SR images showed good agreement with LR images in the SSIM evaluation, indicating that the images were not significantly distorted. Comparison of LR and SR images with paired high resolution (HR) 3D images showed that SR images had a mean (±1 SD) SSIM value of 0.633 ± 0.063 and LR images a value of 0.587 ± 0.067 (p ≪ 0.05). Contours generated on SR images were also more robust to noise addition than those generated on LR images. This study shows that super resolution with a machine learning framework can generate high spatial resolution images from 4 FPS low spatial resolution cine MRI acquired on the ViewRay MR-LINAC, while maintaining tumor contour quality and without significant acquisition or post-processing delay.
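The SSIM values above compare luminance, contrast, and structure between image pairs; a minimal global (single-window) variant for flat intensity lists in [0, 1] is sketched below. Real SSIM implementations use local sliding windows; the constants are the conventional defaults (0.01 and 0.03, squared) for unit dynamic range.

```python
def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equal-length intensity lists in [0, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

An image compared with itself scores 1.0; structurally inverted content scores well below 1, which is why SSIM catches distortions that pixel-wise error metrics can miss.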


Magnetic Resonance Imaging, Cine; Pancreatic Neoplasms; Humans; Imaging, Three-Dimensional; Machine Learning; Pancreatic Neoplasms/diagnostic imaging
15.
Yonsei Med J ; 62(7): 569-576, 2021 Jul.
Article En | MEDLINE | ID: mdl-34164953

PURPOSE: Adjuvant radiotherapy (RT) has been performed to reduce locoregional failure (LRF) following radical cystectomy for locally advanced bladder cancer; however, its efficacy has not been well established. We analyzed locoregional recurrence patterns after radical cystectomy to identify patients who could benefit from adjuvant RT and to determine the optimal target volume. MATERIALS AND METHODS: We retrospectively reviewed 160 patients with stage ≥ pT3 bladder cancer who were treated with radical cystectomy between January 2006 and December 2015. The impact of pathologic findings, including stage, lymphovascular invasion, perineural invasion, margin status, nodal involvement, and the number of nodes removed, on failure patterns was assessed. RESULTS: The median follow-up period was 27.7 months. LRF was observed in 55 patients (34.3%), 12 of whom presented with synchronous local and regional failures as the first failure. The most common failure pattern was distant metastasis (40%). Among LRFs, the most common recurrence site was the cystectomy bed (15.6%). Patients with positive resection margins had a significantly higher recurrence rate than those without (28% vs. 10%, p=0.004). The pelvic nodal recurrence rate was < 5% in pN0 patients; the rate of recurrence in the external and common iliac nodes was 12.5% in pN+ patients. The rate of recurrence in the common iliac nodes was significantly higher in pN2-3 patients than in pN0-1 patients (15.2% vs. 4.4%, p=0.04). CONCLUSION: Pelvic RT could be beneficial, especially for patients with positive resection margins or nodal involvement after radical cystectomy. Radiation fields should be optimized based on patient-specific risk factors.


Cystectomy; Urinary Bladder Neoplasms; Humans; Neoplasm Recurrence, Local/epidemiology; Radiation Oncologists; Retrospective Studies; Urinary Bladder Neoplasms/radiotherapy; Urinary Bladder Neoplasms/surgery
16.
PLoS One ; 16(6): e0253204, 2021.
Article En | MEDLINE | ID: mdl-34125856

Differentiating the invasiveness of ground-glass nodules (GGNs) is clinically important, and several institutions have attempted to develop their own solutions using computed tomography images. The purpose of this study is to evaluate Computer-Aided Analysis of Risk Yield (CANARY), a validated virtual-biopsy and risk-stratification machine-learning tool for lung adenocarcinomas, in a Korean patient population. To this end, a total of 380 GGNs from 360 patients who underwent pulmonary resection in a single institution were reviewed. Based on the Score Indicative of Lung Cancer Aggression (SILA), a quantitative indicator from the CANARY analysis, all of the GGNs were classified as "indolent" (atypical adenomatous hyperplasia, adenocarcinoma in situ, or minimally invasive adenocarcinoma) or "invasive" (invasive adenocarcinoma) and compared with the pathology reports. To account for the possibility of uneven class distribution, statistical analysis was performed on 1) the entire cohort and 2) six randomly extracted sets of class-balanced samples. For each trial, the optimal cutoff SILA was obtained from the receiver operating characteristic (ROC) curve. The classification results were evaluated using several binary classification metrics. Of the 380 GGNs, the mean SILA values for the 65 (17.1%) indolent and 315 (82.9%) invasive lesions were 0.195±0.124 and 0.391±0.208, respectively (p < 0.0001). The areas under the curve (AUC) for the two analyses were 0.814 and 0.809, with an optimal threshold SILA of 0.229 for both. The macro F1-score and geometric mean were 0.675 and 0.745 for the entire cohort, while both scored 0.741 in the class-equalized dataset. These results confirm that CANARY is acceptable for classifying GGNs in Korean patients once the cutoff SILA is calibrated.
We found that the cutoff SILA must be adjusted when applying CANARY to other countries or ethnic groups, and that the geometric mean may be more objective than the F1-score or AUC for binary classification of imbalanced data.
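The cutoff-selection and imbalance-robust-metric procedure described above can be sketched as follows. This is an illustrative reimplementation under stated assumptions: Youden's J is used as the ROC cutoff criterion (the abstract does not specify one), and the scores and labels are toy values, not the study's data.

```python
import numpy as np

def roc_optimal_cutoff(scores, labels):
    """Pick the score threshold maximizing Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t

def geometric_mean_score(pred, labels):
    """sqrt(sensitivity * specificity): unlike accuracy or plain F1,
    this collapses to 0 if either class is entirely misclassified."""
    pred = np.asarray(pred, bool)
    labels = np.asarray(labels, int)
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return float(np.sqrt(sens * spec))

# Imbalanced toy cohort: 2 "indolent" (0) vs. 6 "invasive" (1) lesions
scores = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
labels = [0,    0,    1,    1,    1,    1,    1,    1]
t = roc_optimal_cutoff(scores, labels)          # 0.25 for this toy data
gmean = geometric_mean_score(np.asarray(scores) >= t, labels)
```

The geometric mean's sensitivity-specificity symmetry is what makes it attractive for the 17%/83% class split reported above, where a classifier that always predicts "invasive" would still achieve high accuracy.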


Adenocarcinoma of Lung/diagnosis; Hyperplasia/diagnosis; Precancerous Conditions/diagnosis; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma of Lung/epidemiology; Adenocarcinoma of Lung/pathology; Aged; Biopsy; Diagnosis, Computer-Assisted/methods; Female; Humans; Hyperplasia/diagnostic imaging; Hyperplasia/epidemiology; Hyperplasia/pathology; Machine Learning; Male; Middle Aged; Neoplasm Invasiveness; Precancerous Conditions/diagnostic imaging; Precancerous Conditions/epidemiology; Precancerous Conditions/pathology; Republic of Korea/epidemiology; Risk Assessment; Tomography, X-Ray Computed
17.
Cancers (Basel) ; 13(4)2021 Feb 09.
Article En | MEDLINE | ID: mdl-33572310

This study investigated the feasibility of deep learning-based segmentation (DLS) and continual training for adaptive radiotherapy (RT) of head and neck (H&N) cancer. One hundred patients treated with definitive RT were included. Based on 23 organs-at-risk (OARs) manually segmented in the initial planning computed tomography (CT), a modified FC-DenseNet was trained for DLS: (i) using data obtained from 60 patients, with 20 matched patients in the test set (DLSm); (ii) using data obtained from the same 60 patients, with 20 unmatched patients in the test set (DLSu). Manually contoured OARs in adaptive planning CTs for 20 independent patients were provided as test sets. Deformable image registration (DIR) was also performed. All 23 OARs were compared using quantitative measurements, and nine OARs were also evaluated via subjective assessment from 26 observers using a Turing test. DLSm achieved better performance than both DLSu and DIR (mean Dice similarity coefficient: 0.83 vs. 0.80 vs. 0.70), mainly for glandular structures, whose volume significantly reduced during RT. Based on the subjective measurements, DLS was often perceived as human (49.2%). Furthermore, DLSm was preferred over DLSu (67.2%) and DIR (96.7%), with a rate of required revision similar to that of manual segmentation (28.0% vs. 29.7%). In conclusion, DLS was effective and preferred over DIR. Additionally, continual DLS training is required for effective optimization and robustness in personalized adaptive RT.

18.
Radiat Oncol ; 16(1): 44, 2021 Feb 25.
Article En | MEDLINE | ID: mdl-33632248

BACKGROUND: In breast cancer patients receiving radiotherapy (RT), accurate target delineation and reduction of radiation doses to the nearby normal organs are important. However, manual clinical target volume (CTV) and organs-at-risk (OARs) segmentation for treatment planning increases physicians' workload and inter-physician variability considerably. In this study, we evaluated the potential benefits of deep learning-based auto-segmented contours by comparing them to manually delineated contours for breast cancer patients. METHODS: CTVs for bilateral breasts, regional lymph nodes, and OARs (including the heart, lungs, esophagus, spinal cord, and thyroid) were manually delineated on planning computed tomography scans of 111 breast cancer patients who received breast-conserving surgery. Subsequently, a two-stage convolutional neural network algorithm was used. Quantitative metrics, including the Dice similarity coefficient (DSC) and 95% Hausdorff distance, and qualitative scoring by two panels from 10 institutions were used for analysis. Inter-observer variability and delineation time were assessed; furthermore, dose-volume histograms and dosimetric parameters were also analyzed using another set of patient data. RESULTS: The correlation between the auto-segmented and manual contours was acceptable for OARs, with a mean DSC higher than 0.80 for all OARs. In addition, the CTVs showed favorable results, with mean DSCs higher than 0.70 for all breast and regional lymph node CTVs. Furthermore, qualitative subjective scoring showed that the results were acceptable for all CTVs and OARs, with a median score of at least 8 (possible range: 0-10) for (1) the differences between manual and auto-segmented contours and (2) the extent to which auto-segmentation would assist physicians in clinical practice. The differences in dosimetric parameters between the auto-segmented and manual contours were minimal.
CONCLUSIONS: The feasibility of deep learning-based auto-segmentation in breast RT planning was demonstrated. Although deep learning-based auto-segmentation cannot be a substitute for radiation oncologists, it is a useful tool with excellent potential to assist radiation oncologists in the future. Trial registration: retrospectively registered.
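The 95% Hausdorff distance used here alongside the DSC can be sketched for point-set contours. This is an illustrative NumPy version under one common definition (the maximum of the two directed 95th-percentile nearest-neighbour distances; implementations differ, and the square contours below are toy data):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two contour point sets.
    Less sensitive to single outlier points than the classic (max) Hausdorff."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in A to its nearest neighbour in B
    d_ba = d.min(axis=0)  # each point in B to its nearest neighbour in A
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))

# Two unit squares offset by 1 mm along x: HD95 = 1.0 mm
square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
print(hd95(square, square + [1.0, 0.0]))  # → 1.0
```

Reporting HD95 together with the DSC is common practice because the DSC measures volumetric overlap while HD95 captures boundary disagreement, and the two can diverge.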


Breast Neoplasms/radiotherapy; Deep Learning; Organs at Risk/radiation effects; Radiotherapy Planning, Computer-Assisted/methods; Adult; Aged; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/surgery; Feasibility Studies; Female; Humans; Mastectomy, Segmental; Middle Aged; Observer Variation; Organs at Risk/diagnostic imaging; Radiometry; Radiotherapy, Intensity-Modulated; Tomography, X-Ray Computed
19.
Radiother Oncol ; 153: 139-145, 2020 12.
Article En | MEDLINE | ID: mdl-32991916

Manual segmentation is the gold-standard method for radiation therapy planning; however, it is time-consuming and prone to inter- and intra-observer variation, giving rise to interest in auto-segmentation methods. We evaluated the feasibility of deep learning-based auto-segmentation (DLBAS) in comparison with commercially available atlas-based segmentation (ABAS) solutions for breast cancer radiation therapy. This study used contrast-enhanced planning computed tomography scans from 62 patients with breast cancer who underwent breast-conserving surgery. Contours of clinical target volumes (CTVs), organs, and heart substructures were generated using two commercial ABAS solutions and DLBAS using a fully convolutional DenseNet. The accuracy of the segmentation was assessed for 14 test patients using the Dice similarity coefficient and Hausdorff distance, referencing the expert contours. A sensitivity analysis was performed using non-contrast planning CTs from 14 additional patients. Compared to ABAS, the proposed DLBAS model yielded more consistent results, with the highest average Dice similarity coefficient values and the lowest Hausdorff distances, especially for CTVs and the substructures of the heart. ABAS showed limited performance in soft-tissue-based regions, such as the esophagus, cardiac arteries, and smaller CTVs. The sensitivity analysis between the contrast and non-contrast CT test sets showed little difference in the performance of DLBAS and, conversely, a large discrepancy for ABAS. The proposed DLBAS algorithm was more consistent and robust in its performance than ABAS across the majority of structures when examining both CTVs and normal organs. DLBAS has great potential to aid a key process in the radiation therapy workflow, helping optimise and reduce the clinical workload.


Breast Neoplasms; Deep Learning; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/radiotherapy; Humans; Organs at Risk; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed
20.
Med Phys ; 46(10): 4631-4638, 2019 Oct.
Article En | MEDLINE | ID: mdl-31376292

PURPOSE: The purpose of this study was to present real-time three-dimensional (3D) magnetic resonance imaging (MRI) in the presence of motion for MRI-guided radiotherapy (MRgRT), using dynamic keyhole imaging for high-temporal acquisition and a super-resolution generative (SRG) model for high-spatial reconstruction. METHOD: We propose a unique real-time 3D MRI technique by combining a data-sharing technique (3D dynamic keyhole imaging) with an SRG model using a cascaded deep learning technique. 3D dynamic keyhole imaging utilizes the data-sharing mechanism by combining keyhole central k-space data acquired in real time with high-spatial, high-temporal resolution prior peripheral k-space data at various motion positions prepared by the SRG model. The efficacy of 3D dynamic keyhole imaging with super-resolution (SR_dKeyhole) was compared to the ground-truth super-resolution images reconstructed with the original full k-space data. It was also compared with zero-filling reconstruction (zero-filling), conventional keyhole reconstruction with low-spatial, high-temporal prior data (LR_cKeyhole), and conventional keyhole reconstruction with super-resolution prior data (SR_cKeyhole). RESULTS: High-spatial, high-temporal resolution 3D MRI datasets (1.5 × 1.5 × 6 mm³) were generated from low-spatial, high-temporal resolution 3D MRI datasets (6 × 6 × 6 mm³) using the cascaded deep learning SRG framework (<100 ms/volume). 3D dynamic keyhole imaging with the SRG model provided high-spatial, high-temporal resolution images (1.5 × 1.5 × 6 mm³, 455 ms) with the highest similarity to the ground-truth SR images, without any noticeable artifacts. Structural similarity indices (SSIM) of the reconstructed 3D MRI relative to the original SR 3D MRI were 0.65, 0.66, 0.86, and 0.89 for zero-filling, LR_cKeyhole, SR_cKeyhole, and SR_dKeyhole, respectively (1 for an identical image).
In addition, the average image relative error (IRE) of the reconstructed 3D MRI relative to the original SR 3D MRI was 0.169, 0.191, 0.079, and 0.067 for zero-filling, LR_cKeyhole, SR_cKeyhole, and SR_dKeyhole, respectively (0 for an identical image). CONCLUSIONS: We demonstrated that high-spatial, high-temporal resolution 3D MRI is feasible by combining 3D dynamic keyhole imaging with an SRG model, in terms of both image quality and imaging time. The proposed technique can be utilized for real-time 3D MRgRT.
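The keyhole data-sharing mechanism described in the METHOD section, where central k-space acquired in real time is merged with prior peripheral k-space, can be sketched in NumPy. This is a simplified 2D illustration of the general keyhole idea, not the authors' 3D implementation; the array sizes and keyhole width are arbitrary:

```python
import numpy as np

def keyhole_reconstruct(prior_kspace, live_central, keyhole_width):
    """Merge freshly acquired central k-space rows into prior peripheral
    k-space, then inverse-FFT back to image space (1D keyhole along the
    phase-encode axis of a centered 2D k-space)."""
    k = prior_kspace.copy()
    n = k.shape[0]
    lo = n // 2 - keyhole_width // 2
    k[lo:lo + keyhole_width, :] = live_central  # swap in the real-time centre
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

rng = np.random.default_rng(0)
image_now = rng.random((32, 32))                 # "current" anatomy
k_now = np.fft.fftshift(np.fft.fft2(image_now))  # centered k-space of it
k_prior = np.fft.fftshift(np.fft.fft2(rng.random((32, 32))))  # stale prior
# Acquire only the central 8 of 32 phase-encode lines in "real time"
recon = keyhole_reconstruct(k_prior, k_now[12:20, :], 8)
```

The speed-up comes from acquiring only the central lines per time point: low spatial frequencies carry most of the contrast and motion information, while the static high-frequency detail is inherited from the prior scan.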


Imaging, Three-Dimensional; Magnetic Resonance Imaging; Movement; Radiotherapy, Image-Guided; Signal-To-Noise Ratio; Air; Monte Carlo Method; Water