Results 1 - 20 of 27
1.
Med Phys ; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), the structural similarity index, and the folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
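As a note for readers, the Dice similarity coefficient used above to score contour propagation can be sketched in a few lines of numpy; this is a generic illustration with invented toy masks, not the authors' evaluation code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B|/(|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping 6x6 square "contours" on a 10x10 grid.
ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True
prop = np.zeros((10, 10), dtype=bool); prop[3:9, 3:9] = True
print(round(dice(ref, prop), 3))  # -> 0.694
```

A Dice of 1 means perfect overlap; registration papers such as this one typically report it per organ, here for the prostate and bladder.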


Subject(s)
Deep Learning , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Pelvis , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy , Prostatic Neoplasms/pathology , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Algorithms
2.
Med Phys ; 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38063208

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) provides state-of-the-art image quality for neuroimaging, consisting of multiple separately acquired contrasts. Synthetic MRI aims to accelerate examinations by synthesizing any desirable contrast from a single acquisition. PURPOSE: We developed a physics-informed deep learning-based method to synthesize multiple brain MRI contrasts from a single 5-min acquisition and investigated its ability to generalize to arbitrary contrasts. METHODS: A dataset of 55 subjects acquired with a clinical MRI protocol and a 5-min transient-state sequence was used. The model, based on a generative adversarial network, maps data acquired from the 5-min scan to "effective" quantitative parameter maps (q*-maps), feeding the generated PD, T1, and T2 maps into a signal model to synthesize four clinical contrasts (proton density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery), from which losses are computed. The synthetic contrasts were compared to an end-to-end deep learning-based method proposed in the literature. The generalizability of the proposed method was investigated for five volunteers by synthesizing three contrasts unseen during training and comparing these to ground-truth acquisitions via qualitative assessment and contrast-to-noise ratio (CNR) measurements. RESULTS: The physics-informed method matched the quality of the end-to-end method for the four standard contrasts, with structural similarity metrics above 0.75 ± 0.08 (± std), peak signal-to-noise ratios above 22.4 ± 1.9, and a depiction of compact lesions comparable to standard MRI. Additionally, the physics-informed method enabled contrast adjustment and yielded similar signal contrast and comparable CNRs to the ground-truth acquisitions for three sequences unseen during model training.
CONCLUSIONS: The study demonstrated the feasibility of physics-informed, deep learning-based synthetic MRI to generate high-quality contrasts and generalize to contrasts beyond the training data. This technology has the potential to accelerate neuroimaging protocols.
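The "signal model" step, mapping q*-maps (PD, T1, T2) to weighted contrasts, can be illustrated with the textbook spin-echo equation; the sequence parameters and tissue values below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def spin_echo(pd, t1, t2, tr, te):
    """Textbook spin-echo signal: PD weighted by T1 recovery and T2 decay."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative white-matter values at 3 T (PD in arbitrary units, times in ms).
pd, t1, t2 = 0.7, 830.0, 80.0
t1w = spin_echo(pd, t1, t2, tr=500.0, te=10.0)    # short TR/TE: T1-weighted
t2w = spin_echo(pd, t1, t2, tr=4000.0, te=100.0)  # long TR/TE: T2-weighted
```

Once parameter maps are available, any TR/TE combination (i.e., any contrast of this family) can be synthesized from one acquisition, which is the premise of synthetic MRI.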

3.
Med Phys ; 50(9): 5331-5342, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37527331

ABSTRACT

BACKGROUND: Respiratory-resolved four-dimensional magnetic resonance imaging (4D-MRI) provides essential motion information for accurate radiation treatments of mobile tumors. However, obtaining high-quality 4D-MRI suffers from long acquisition and reconstruction times. PURPOSE: To develop a deep learning architecture to quickly acquire and reconstruct high-quality 4D-MRI, enabling accurate motion quantification for MRI-guided radiotherapy (MRIgRT). METHODS: A small convolutional neural network called MODEST is proposed to reconstruct 4D-MRI by performing a spatial and temporal decomposition, omitting the need for 4D convolutions to use all the spatio-temporal information present in 4D-MRI. This network is trained on undersampled 4D-MRI after respiratory binning to reconstruct high-quality 4D-MRI obtained by compressed sensing reconstruction. The network is trained, validated, and tested on 4D-MRI of 28 lung cancer patients acquired with a T1-weighted golden-angle radial stack-of-stars (GA-SOS) sequence. The 4D-MRI of 18, 5, and 5 patients were used for training, validation, and testing, respectively. Network performance is evaluated on image quality, measured by the structural similarity index (SSIM), and on motion consistency, assessed by comparing the position of the lung-liver interface on undersampled 4D-MRI before and after respiratory binning. The network is compared to conventional architectures such as a U-Net, which has 30 times more trainable parameters. RESULTS: MODEST can reconstruct 4D-MRI with higher image quality than a U-Net, despite a thirty-fold reduction in trainable parameters. High-quality 4D-MRI can be obtained using MODEST in approximately 2.5 min, including acquisition, processing, and reconstruction. CONCLUSION: High-quality accelerated 4D-MRI can be obtained using MODEST, which is particularly interesting for MRIgRT.
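The parameter saving from a spatial/temporal decomposition of the kind MODEST relies on can be seen with a back-of-the-envelope count; the kernel sizes and channel count below are hypothetical, not MODEST's actual architecture:

```python
# Parameters of one joint 4D conv layer vs. a spatial (3D) + temporal (1D) pair.
# Hypothetical sizes: C=32 channels, 3x3x3 spatial kernel, 3-tap temporal kernel.
C, k, kt = 32, 3, 3
joint_4d = C * C * k**3 * kt            # 32*32*3*3*3*3 = 82944 weights
decomposed = C * C * k**3 + C * C * kt  # 27648 + 3072 = 30720 weights
print(joint_4d, decomposed, round(joint_4d / decomposed, 1))  # -> 82944 30720 2.7
```

The saving grows with kernel size and depth, which is how a factorized network can stay far smaller than a U-Net operating jointly on all four dimensions.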


Subject(s)
Lung Neoplasms , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Motion , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
4.
Phys Med ; 112: 102642, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37473612

ABSTRACT

BACKGROUND: Synthetic computed tomography (sCT) has been proposed and increasingly clinically adopted to enable magnetic resonance imaging (MRI)-based radiotherapy. Deep learning (DL) has recently demonstrated the ability to generate accurate sCT from fixed MRI acquisitions. However, MRI protocols may change over time or differ between centres, resulting in low-quality sCT due to poor model generalisation. PURPOSE: To investigate domain randomisation (DR) for increasing the generalisation of a DL model for brain sCT generation. METHODS: CT and corresponding T1-weighted MRI with/without contrast, T2-weighted, and FLAIR MRI from 95 patients undergoing radiotherapy were collected, with FLAIR considered the unseen sequence on which to investigate generalisation. A "Baseline" generative adversarial network was trained with/without the FLAIR sequence to test how a model performs without DR. Image similarity and accuracy of sCT-based dose plans were assessed against CT to select the best-performing DR approach against the Baseline. RESULTS: The Baseline model had the poorest performance on FLAIR, with mean absolute error (MAE) = 106 ± 20.7 HU (mean ± σ). Performance on FLAIR significantly improved for the DR model, with MAE = 99.0 ± 14.9 HU, but remained inferior to the performance of the Baseline+FLAIR model (MAE = 72.6 ± 10.1 HU). Similarly, an improvement in γ-pass rate was obtained for DR vs Baseline. CONCLUSION: DR improved image similarity and dose accuracy on the unseen sequence compared to training only on acquired MRI. DR makes the model more robust, reducing the need for re-training when applying a model to sequences that are unseen and unavailable for retraining.
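Domain randomisation broadly means augmenting training images with randomised appearance so the model does not overfit one acquisition protocol; a minimal intensity-randomisation sketch of that idea (one possible transform, not the paper's actual DR pipeline) could look like:

```python
import numpy as np

def randomise_contrast(mri: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly remap intensities (gamma curve + global scale) to mimic
    contrast variation between MRI protocols."""
    lo, hi = mri.min(), mri.max()
    x = (mri - lo) / (hi - lo + 1e-8)   # normalise to [0, 1]
    gamma = rng.uniform(0.5, 2.0)       # random non-linear contrast change
    scale = rng.uniform(0.8, 1.2)       # random global intensity scaling
    return scale * x**gamma

rng = np.random.default_rng(0)
image = rng.random((4, 4))              # stand-in for an MRI slice
augmented = randomise_contrast(image, rng)
```

Applied on the fly during training, such transforms expose the network to a wider appearance distribution than the acquired sequences alone.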


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Radiotherapy Planning, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Radiotherapy Dosage , Brain/diagnostic imaging
5.
Med Phys ; 50(7): 4664-4674, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37283211

ABSTRACT

PURPOSE: Medical imaging has become increasingly important in diagnosing and treating oncological patients, particularly in radiotherapy. Recent advances in synthetic computed tomography (sCT) generation have increased interest in public challenges to provide data and evaluation metrics for openly comparing different approaches. This paper describes a dataset of brain and pelvis computed tomography (CT) images with rigidly registered cone-beam CT (CBCT) and magnetic resonance imaging (MRI) images to facilitate the development and evaluation of sCT generation for radiotherapy planning. ACQUISITION AND VALIDATION METHODS: The dataset consists of CT, CBCT, and MRI of 540 brain and 540 pelvis radiotherapy patients from three Dutch university medical centers. Subjects' ages ranged from 3 to 93 years, with a mean age of 60. Various scanner models and acquisition settings were used across patients from the three data-providing centers. Details are available in comma-separated value (CSV) files provided with the datasets. DATA FORMAT AND USAGE NOTES: The data are available on Zenodo (https://doi.org/10.5281/zenodo.7260704, https://doi.org/10.5281/zenodo.7868168) under the SynthRAD2023 collection. The images for each subject are available in NIfTI format. POTENTIAL APPLICATIONS: This dataset will enable the evaluation and development of image synthesis algorithms for radiotherapy purposes on a realistic multi-center dataset with varying acquisition protocols. Synthetic CT generation has numerous applications in radiation therapy, including diagnosis, treatment planning, treatment monitoring, and surgical planning.


Subject(s)
Image Processing, Computer-Assisted , Radiotherapy, Image-Guided , Humans , Child, Preschool , Child , Adolescent , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Cone-Beam Computed Tomography , Pelvis , Radiotherapy, Image-Guided/methods , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
6.
Radiother Oncol ; 179: 109456, 2023 02.
Article in English | MEDLINE | ID: mdl-36592740

ABSTRACT

BACKGROUND: Post-operative stereotactic radiosurgery (SRS) of brain metastases patients is typically planned on a post-recovery MRI, 2-4 weeks after resection. However, the intracranial metastasis may (re-)grow in this period. Planning SRS directly on the post-operative MRI enables shortening this time interval, bringing forward the start of adjuvant systemic therapy and so decreasing the chance of extracranial progression. The MRI-Linac (MRL) allows the simultaneous execution of the post-operative MRI and SRS treatment. The aim of this work was to investigate the dosimetric feasibility of MRL-based post-operative SRS. METHODS: MRL treatments based on the direct post-operative MRI were simulated for thirteen patients with resectable single brain metastases. The gross tumor volume (GTV) was contoured on the direct post-operative scans and compared to the post-recovery MRI GTV. Three plans were created for each patient: a non-coplanar VMAT CT-Linac plan (ncVMAT) and a coplanar IMRT MRL plan (cIMRT) on the direct post-operative MRI, and a ncVMAT plan on the post-recovery MRI as the current clinical standard. RESULTS: Between the direct post-operative and post-recovery MRI, 15.5 % of the cavities shrank by > 2 cc, and 46 % expanded by ≥ 2 cc. Although the direct post-operative cIMRT plans had a higher median gradient index (3.6 vs 2.7) and median V3Gy of the skin (18.4 vs 1.1 cc) compared to ncVMAT plans, they were clinically acceptable. CONCLUSION: Direct post-operative MRL-based SRS for resection cavities of brain metastases is dosimetrically acceptable, with advantages in patient comfort and logistics. Given this dosimetric feasibility, the clinical benefit of the workflow should be investigated.


Subject(s)
Brain Neoplasms , Radiosurgery , Humans , Feasibility Studies , Radiotherapy Planning, Computer-Assisted , Radiotherapy Dosage , Brain Neoplasms/secondary , Magnetic Resonance Imaging
7.
Med Image Anal ; 80: 102509, 2022 08.
Article in English | MEDLINE | ID: mdl-35688047

ABSTRACT

Convolutional neural networks (CNNs) are increasingly adopted in medical imaging, e.g., to reconstruct high-quality images from undersampled magnetic resonance imaging (MRI) acquisitions or estimate subject motion during an examination. MRI is naturally acquired in the complex domain ℂ, obtaining magnitude and phase information in k-space. However, CNNs in complex regression tasks are almost exclusively trained to minimize the L2 loss or to maximize the magnitude structural similarity (SSIM), which is possibly not optimal as these losses do not take full advantage of the magnitude and phase information present in the complex domain. This work identifies that minimizing the L2 loss in the complex field has an asymmetry in the magnitude/phase loss landscape and is biased, underestimating the reconstructed magnitude. To resolve this, we propose a new loss function for regression in the complex domain called ⊥-loss, which adds a novel phase term to established magnitude loss functions, e.g., L2 or SSIM. We show ⊥-loss is symmetric in the magnitude/phase domain and has favourable properties when applied to regression in the complex domain. Specifically, we evaluate the ⊥+ℓ2-loss and ⊥+SSIM-loss for complex undersampled MR image reconstruction tasks and MR image registration tasks. We show that training a model to minimize the ⊥+ℓ2-loss outperforms models trained to minimize the L2 loss and results in similar performance compared to models trained to maximize the magnitude SSIM, while offering high-quality phase reconstruction. Moreover, ⊥-loss is defined in ℝⁿ, and we apply the loss function to the ℝ² domain by learning 2D deformation vector fields for image registration. We show that a model trained to minimize the ⊥+ℓ2-loss outperforms models trained to minimize the end-point error loss.
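The geometric intuition behind a phase term, an error that vanishes when prediction and target point in the same complex direction, can be sketched as follows; this is a simplified illustration of the idea, not necessarily the paper's exact definition of ⊥-loss:

```python
import numpy as np

def perp_term(pred: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Distance from the target to the line through 0 and the prediction:
    zero when the two complex numbers have the same phase."""
    cross = pred.real * target.imag - pred.imag * target.real
    return np.abs(cross) / (np.abs(pred) + 1e-12)

def perp_l2_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Sketch of a combined loss: phase term plus squared magnitude error."""
    mag = (np.abs(pred) - np.abs(target)) ** 2
    return float(np.mean(perp_term(pred, target) + mag))

z = np.array([1 + 1j, 2 - 1j])
print(perp_l2_loss(z, z))  # identical prediction and target -> 0.0
```

Unlike a plain L2 loss on complex values, the phase and magnitude contributions are separated here, which is the property the paper exploits to avoid magnitude underestimation.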


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
8.
Med Phys ; 48(11): 6597-6613, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34525223

ABSTRACT

PURPOSE: To enable real-time adaptive magnetic resonance imaging-guided radiotherapy (MRIgRT) by obtaining time-resolved three-dimensional (3D) deformation vector fields (DVFs) with high spatiotemporal resolution and low latency (<500 ms). THEORY AND METHODS: Respiratory-resolved T1-weighted 4D-MRI of 27 patients with lung cancer were acquired using a golden-angle radial stack-of-stars readout. A multiresolution convolutional neural network (CNN) called TEMPEST was trained on up to 32× retrospectively undersampled MRI of 17 patients, reconstructed with a nonuniform fast Fourier transform, to learn optical flow DVFs. TEMPEST was validated using 4D respiratory-resolved MRI, a digital phantom, and a physical motion phantom. The time-resolved motion estimation was evaluated in-vivo using two volunteer scans, acquired on a hybrid MR-scanner with integrated linear accelerator. Finally, we evaluated the model robustness on a publicly available four-dimensional computed tomography (4D-CT) dataset. RESULTS: TEMPEST produced accurate DVFs on respiratory-resolved MRI at 20-fold acceleration, with an average end-point error <2 mm, both on respiratory-sorted MRI and on a digital phantom. TEMPEST estimated accurate time-resolved DVFs on MRI of a motion phantom, with an error <2 mm at 28× undersampling. On two volunteer scans, TEMPEST accurately estimated motion compared to the self-navigation signal using 50 spokes per dynamic (366× undersampling). At this undersampling factor, DVFs were estimated within 200 ms, including MRI acquisition. On fully sampled CT data, we achieved a target registration error of 1.87 ± 1.65 mm without retraining the model. CONCLUSION: A CNN trained on undersampled MRI produced accurate 3D DVFs with high spatiotemporal resolution for MRIgRT.
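The end-point error reported above is simply the mean Euclidean length of the difference between two deformation vector fields; a generic numpy sketch with toy fields (not the study's data):

```python
import numpy as np

def end_point_error(dvf_est: np.ndarray, dvf_ref: np.ndarray) -> float:
    """Mean Euclidean distance between estimated and reference displacement
    vectors; the fields have shape (..., 3) for 3D DVFs."""
    return float(np.linalg.norm(dvf_est - dvf_ref, axis=-1).mean())

ref = np.zeros((2, 2, 2, 3))        # toy reference DVF on a 2x2x2 grid
est = ref.copy()
est[..., 0] = 1.0                   # 1 mm offset along one axis everywhere
print(end_point_error(est, ref))    # -> 1.0
```

Reporting this in millimetres (as the <2 mm figures above) requires multiplying voxel displacements by the voxel spacing.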


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Imaging, Three-Dimensional , Motion , Phantoms, Imaging , Respiration , Retrospective Studies
9.
Clin Transl Radiat Oncol ; 31: 28-33, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34522796

ABSTRACT

PURPOSE: Optic nerves are part of the craniospinal irradiation (CSI) target volume. Modern radiotherapy techniques achieve highly conformal target doses while avoiding organs-at-risk such as the lens. The magnitude of eye movement and its influence on CSI target and avoidance volumes are unclear. We aimed to evaluate the movement range of lenses and optic nerves and its influence on the dose distribution of several planning techniques. METHODS: Ten volunteers underwent MRI scans in various gaze directions (neutral, left, right, cranial, caudal). Lenses, orbital optic nerves, optic discs, and CSI target volumes were delineated. 36-Gy cranial irradiation plans were constructed on synthetic CT images in neutral gaze with Volumetric Modulated Arc Therapy (VMAT), pencil-beam scanning proton therapy, and 3D-conventional photons. Movement amplitudes of lenses and optic discs were analyzed, as well as the influence of gaze direction on lens and orbital optic nerve dose distributions. RESULTS: The mean shift of eye structures from the neutral position was greatest in caudal gaze; -5.8 ± 1.2 mm (± SD) for lenses and 7.0 ± 2.0 mm for optic discs. In 3D-conventional plans, caudal gaze decreased the Mean Lens Dose (MLD). In VMAT and proton plans, eye movements mainly increased the MLD and diminished D98 orbital optic nerve (D98OON) coverage; the mean MLD increased by up to 5.5 Gy [total ΔMLD range -8.1 to 10.0 Gy], and the mean D98OON decreased by up to 3.3 Gy [total ΔD98OON range -13.6 to 1.2 Gy]. VMAT plans optimized for an optic disc Internal Target Volume and a lens Planning organ-at-Risk Volume resulted in a higher MLD across gaze directions. D98OON was ≥95% of the prescribed dose in 95 of 100 evaluated gaze directions, while all-gaze bilateral D98OON changed significantly in 1 of 10 volunteers. CONCLUSION: With modern CSI techniques, eye movements result in higher lens doses and a mean detriment to orbital optic nerve dose coverage of <10% of the prescribed dose.

10.
Med Phys ; 48(11): 6537-6566, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34407209

ABSTRACT

Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity and future trends, and the potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Positron-Emission Tomography , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed
11.
Insects ; 12(6)2021 Jun 09.
Article in English | MEDLINE | ID: mdl-34207548

ABSTRACT

Aprostocetus fukutai is a specialist egg parasitoid of the citrus longhorned beetle Anoplophora chinensis, a high-risk invasive pest of hardwood trees. The parasitoid overwinters as diapausing mature larvae within the host egg and emerges in early summer in synchrony with the egg-laying peak of A. chinensis. This study investigated the parasitoid's diapause survival in parasitized host eggs that either remained in potted trees under semi-natural conditions in southern France or were removed from the wood and held at four different humidities (44, 75, 85-93 and 100% RH) at 11 °C, or at four different temperature regimes (2, 5, 10 and 12.5 °C) at 100% RH, in the laboratory. The temperature regimes reflect overwintering temperatures across the parasitoid's geographical distribution in its native range. Results show that the parasitoid resumed its development to the adult stage under normal rearing conditions (22 °C, 100% RH, 14L:10D) after 6 or 7 months of cold chilling in both the semi-natural and laboratory conditions. It had a low survival rate (36.7%) on potted plants due to desiccation or the tree's wound defense response. No parasitoids survived at 44% RH, but the survival rate increased with humidity, reaching its highest (93.7%) at 100% RH. The survival rate also increased from 21.0% at 2 °C to 82.8% at 12.5 °C. Post-diapause developmental time decreased with increased humidity or temperature. There was no difference in the lifetime fecundity of females emerged from 2 and 12.5 °C. These results suggest that 100% RH and 12.5 °C are the most suitable diapause conditions for laboratory rearing of this parasitoid.

12.
Radiother Oncol ; 153: 197-204, 2020 12.
Article in English | MEDLINE | ID: mdl-32976877

ABSTRACT

BACKGROUND AND PURPOSE: To enable accurate magnetic resonance imaging (MRI)-based dose calculations, synthetic computed tomography (sCT) images need to be generated. We aimed to assess the feasibility of dose calculations from MRI acquired with a heterogeneous set of imaging protocols for paediatric patients affected by brain tumours. MATERIALS AND METHODS: Sixty paediatric patients undergoing brain radiotherapy were included. MR imaging protocols varied among patients, and data heterogeneity was maintained in the train/validation/test sets. Three 2D conditional generative adversarial networks (cGANs) were trained to generate sCT from T1-weighted MRI, considering the three orthogonal planes and their combination (multi-plane sCT). For each patient, the median and standard deviation (σ) of the three views were calculated, obtaining a combined sCT and a proxy uncertainty map, respectively. The sCTs were evaluated against the planning CT in terms of image similarity and accuracy for photon and proton dose calculations. RESULTS: A mean absolute error of 61 ± 14 HU (mean ± 1σ) was obtained in the intersection of the body contours between CT and sCT. The combined multi-plane sCTs performed better than sCTs from any single plane. Uncertainty maps highlighted that multi-plane sCTs differed at the body contours and air cavities. A dose difference of -0.1 ± 0.3% and 0.1 ± 0.4% was obtained in the volume receiving >90% of the prescribed dose, with mean γ2%,2mm pass-rates of 99.5 ± 0.8% and 99.2 ± 1.1% for photon and proton planning, respectively. CONCLUSION: Accurate MR-based dose calculation using a combination of three orthogonal planes for sCT generation is feasible for paediatric brain cancer patients, even when training on a heterogeneous dataset.
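The multi-plane combination step, a voxel-wise median of the three per-plane predictions plus a voxel-wise standard deviation as an uncertainty proxy, can be sketched with toy volumes (random arrays standing in for the cGAN outputs, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(40.0, 10.0, size=(8, 8, 8))   # toy "true" HU volume

# Three per-plane sCT predictions (axial/coronal/sagittal), each with noise.
planes = np.stack([base + rng.normal(0.0, 5.0, base.shape) for _ in range(3)])

fused_sct = np.median(planes, axis=0)       # combined sCT
uncertainty = np.std(planes, axis=0)        # proxy uncertainty map (sigma)
```

The median is robust to one plane disagreeing, while high σ flags voxels (e.g., at body contours or air cavities) where the planes diverge.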


Subject(s)
Deep Learning , Protons , Brain , Child , Humans , Magnetic Resonance Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed
13.
Radiat Oncol ; 15(1): 104, 2020 May 11.
Article in English | MEDLINE | ID: mdl-32393280

ABSTRACT

BACKGROUND: Structure delineation is a necessary, yet time-consuming, manual procedure in radiotherapy. Recently, convolutional neural networks have been proposed to speed up and automatise this procedure, obtaining promising results. With the advent of magnetic resonance imaging (MRI)-guided radiotherapy, MR-based segmentation is becoming increasingly relevant. However, the majority of studies have investigated automatic contouring based on computed tomography (CT). PURPOSE: In this study, we investigate the feasibility of clinical use of deep learning-based automatic delineation of organs at risk (OARs) on MRI. MATERIALS AND METHODS: We included 150 patients diagnosed with prostate cancer who underwent MR-only radiotherapy. A three-dimensional (3D) T1-weighted dual spoiled gradient-recalled echo sequence was acquired with 3T MRI for the generation of the synthetic CT. The first 48 patients were included in a feasibility study training two 3D convolutional networks, DeepMedic and dense V-net (dV-net), to segment the bladder, rectum, and femurs. A research version of an atlas-based software was considered for comparison. Dice similarity coefficients, 95% Hausdorff distances (HD95), and mean distances were calculated against clinical delineations. For eight patients, an expert RTT scored the quality of the contouring for all three methods. A choice among the three approaches was made, and the chosen approach was retrained on 97 patients and implemented for automatic use in the clinical workflow. For the successive 53 patients, Dice, HD95, and mean distances were calculated against the clinically used delineations. RESULTS: DeepMedic, dV-net, and the atlas-based software generated contours in 60 s, 4 s, and 10-15 min, respectively. Performances were higher for both networks compared to the atlas-based software. The qualitative analysis demonstrated that delineations from DeepMedic required fewer adaptations, followed by dV-net and the atlas-based software.
DeepMedic was clinically implemented. After retraining DeepMedic and testing on the successive patients, the performances slightly improved. CONCLUSION: High conformality for OAR delineation was achieved with two in-house trained networks, obtaining a significant speed-up of the delineation procedure. A comparison of different approaches was performed, leading to the successful adoption of one of the neural networks, DeepMedic, in the clinical workflow. DeepMedic maintained in the clinical setting the accuracy obtained in the feasibility study.


Subject(s)
Deep Learning , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Male , Organs at Risk
14.
Phys Med Biol ; 65(15): 155015, 2020 08 07.
Article in English | MEDLINE | ID: mdl-32408295

ABSTRACT

To enable magnetic resonance imaging (MRI)-guided radiotherapy with real-time adaptation, motion must be quickly estimated with low latency. The motion estimate is used to adapt the radiation beam to the current anatomy, yielding a more conformal dose distribution. As the MR acquisition is the largest component of the latency, deep learning (DL) may reduce the total latency by enabling much higher undersampling factors compared to conventional reconstruction and motion estimation methods. The benefit of DL on image reconstruction and motion estimation was investigated for obtaining accurate deformation vector fields (DVFs) with high temporal resolution and minimal latency. 2D cine MRI acquired at 1.5 T from 135 abdominal cancer patients was retrospectively included in this study. Undersampled radial golden-angle acquisitions were retrospectively simulated. DVFs were computed using different combinations of conventional and DL-based methods for image reconstruction and motion estimation, allowing a comparison of four approaches to achieve real-time motion estimation. The four approaches were evaluated based on the end-point error and root-mean-square error compared to a ground-truth optical flow estimate on fully sampled images, the structural similarity (SSIM) after registration, and the time necessary to acquire k-space, reconstruct an image, and estimate motion. The lowest DVF error and highest SSIM were obtained using conventional methods up to [Formula: see text]. For undersampling factors [Formula: see text], the lowest DVF error and highest SSIM were obtained using conventional image reconstruction and DL-based motion estimation. We have found that, with this combination, accurate DVFs can be obtained up to [Formula: see text] with an average root-mean-square error of up to 1 mm and an SSIM greater than 0.8 after registration, taking 60 ms.
High-quality 2D DVFs from highly undersampled k-space can be obtained at high temporal resolution using conventional image reconstruction and a deep learning-based motion estimation approach for real-time adaptive MRI-guided radiotherapy.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine , Movement , Radiotherapy, Image-Guided , Abdominal Neoplasms/diagnostic imaging , Abdominal Neoplasms/physiopathology , Abdominal Neoplasms/radiotherapy , Humans , Retrospective Studies , Time Factors
15.
Phys Imaging Radiat Oncol ; 14: 24-31, 2020 Apr.
Article in English | MEDLINE | ID: mdl-33458310

ABSTRACT

BACKGROUND AND PURPOSE: Adaptive radiotherapy based on cone-beam computed tomography (CBCT) requires high CT number accuracy to ensure accurate dose calculations. Recently, deep learning has been proposed for fast CBCT artefact corrections on single anatomical sites. This study investigated the feasibility of applying a single convolutional network to facilitate dose calculation based on CBCT for head-and-neck, lung and breast cancer patients. MATERIALS AND METHODS: Ninety-nine patients diagnosed with head-and-neck, lung or breast cancer undergoing radiotherapy with CBCT-based position verification were included in this study. The CBCTs were registered to the planning CT according to clinical procedures. Three cycle-consistent generative adversarial networks (cycle-GANs) were trained in an unpaired manner on 15 patients per anatomical site, generating synthetic CTs (sCTs). Another network was trained on all the anatomical sites together. The performances of all four networks were compared and evaluated for image similarity against rescan CT (rCT). Clinical plans were recalculated on rCT and sCT and analysed through voxel-based dose differences and γ-analysis. RESULTS: An sCT was generated in 10 s. Image similarity was comparable between models trained on different anatomical sites and a single model for all sites. Mean dose differences <0.5% were obtained in high-dose regions. Mean gamma (3%, 3 mm) pass-rates >95% were achieved for all sites. CONCLUSION: Cycle-GAN reduced CBCT artefacts and increased similarity to CT, enabling sCT-based dose calculations. A single network achieved CBCT-based dose calculation, generating synthetic CT for head-and-neck, lung, and breast cancer patients with similar performance to networks specifically trained for each anatomical site.

16.
Magn Reson Med ; 83(4): 1429-1441, 2020 04.
Article in English | MEDLINE | ID: mdl-31593328

ABSTRACT

PURPOSE: To study the influence of gradient echo-based contrasts as input channels to a 3D patch-based neural network trained for synthetic CT (sCT) generation in canine and human populations. METHODS: Magnetic resonance images and CT scans of human and canine pelvic regions were acquired and paired using nonrigid registration. Magnitude MR images and Dixon-reconstructed water, fat, in-phase and opposed-phase images were obtained from a single T1-weighted multi-echo gradient-echo acquisition. From this set, six input configurations were defined, each containing one to four MR images regarded as input channels. For each configuration, a U-Net-derived deep learning model was trained for synthetic CT generation. Reconstructed Hounsfield unit maps were evaluated with peak SNR, mean absolute error, and mean error. The Dice similarity coefficient and surface distance maps assessed the geometric fidelity of bones. Repeatability was estimated by replicating the training up to 10 times. RESULTS: Seventeen canine and 23 human subjects were included in the study. Performance and repeatability of single-channel models depended on the TE-related water-fat interference, with variations of up to 17% in mean absolute error overall and up to 28% specifically in bones. Repeatability, Dice similarity coefficient, and mean absolute error were statistically significantly better in multichannel models, with mean absolute errors ranging from 33 to 40 Hounsfield units in humans and from 35 to 47 Hounsfield units in canines. CONCLUSION: Significant differences in performance and robustness of deep learning models for synthetic CT generation were observed depending on the input. In-phase images outperformed opposed-phase images, and Dixon-reconstructed multichannel inputs outperformed single-channel inputs.
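The mean absolute error and mean error metrics used here (and in several of the entries below) reduce to a simple masked voxel-wise comparison. A minimal sketch, assuming aligned CT and sCT arrays in Hounsfield units and an optional evaluation mask (e.g. a bone mask); the function name is illustrative:

```python
import numpy as np

def hu_errors(ct, sct, mask=None):
    """Mean absolute error (MAE) and mean error (ME) in Hounsfield
    units between a reference CT and a synthetic CT.

    `mask` restricts the evaluation region, e.g. to a bone mask or the
    body contour; when None, the whole volume is used.
    """
    ct = np.asarray(ct, dtype=float)
    sct = np.asarray(sct, dtype=float)
    if mask is None:
        mask = np.ones(ct.shape, dtype=bool)
    diff = sct[mask] - ct[mask]                 # signed HU difference
    return float(np.abs(diff).mean()), float(diff.mean())
```

Reporting both metrics is informative because a near-zero ME can hide large compensating errors that the MAE exposes.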


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Animals , Dogs , Humans , Magnetic Resonance Imaging , Neural Networks, Computer , Tomography, X-Ray Computed
17.
Phys Med Biol ; 64(22): 225004, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31610527

ABSTRACT

In the presence of inter-fractional anatomical changes, clinical benefits are anticipated from image-guided adaptive radiotherapy. Nowadays, cone-beam CT (CBCT) imaging is mostly utilized for pre-treatment position verification. Due to various artifacts, image quality is typically not sufficient for photon or proton dose calculation, thus demanding accurate CBCT correction, as potentially provided by deep learning techniques. This work investigated the feasibility of utilizing a cycle-consistent generative adversarial network (cycleGAN) for prostate CBCT correction using unpaired training. Thirty-three patients were included. The network was trained to translate uncorrected, original CBCT images (CBCTorg) into planning-CT-equivalent images (CBCTcycleGAN). HU accuracy was determined by comparison to a previously validated CBCT correction technique (CBCTcor). Dosimetric accuracy was inferred for volumetric-modulated arc photon therapy (VMAT) and opposing single-field uniform dose (OSFUD) proton plans, optimized on CBCTcor and recalculated on CBCTcycleGAN. Single-sided SFUD proton plans were utilized to assess proton range accuracy. The mean HU error of CBCTcycleGAN with respect to CBCTcor decreased from 24 HU for CBCTorg to -6 HU. Dose calculation accuracy was high for VMAT, with average pass rates of 100%/89% for a 2%/1% dose difference criterion. For proton OSFUD plans, the average pass rate for a 2% dose difference criterion was 80%; using a (2%, 2 mm) gamma criterion, the pass rate was 96%. 93% of all analyzed SFUD profiles had a range agreement better than 3 mm. CBCT correction time was reduced from 6-10 min for CBCTcor to 10 s for CBCTcycleGAN. Our study demonstrated the feasibility of utilizing a cycleGAN for CBCT correction, achieving high dose calculation accuracy for VMAT; for proton therapy, further improvements may be required. Because of the unpaired training, the approach does not rely on anatomically consistent training data or potentially inaccurate deformable image registration. The substantial speed-up for CBCT correction renders the method particularly interesting for adaptive radiotherapy.
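The unpaired training that entries 15 and 17 rely on hinges on the cycle-consistency constraint: mapping a CBCT to the CT domain and back should reproduce the input. The L1 form of that loss can be sketched generically; here `g` and `f` are arbitrary callables standing in for the two generators (an illustration of the loss term only, not a cycleGAN implementation):

```python
import numpy as np

def cycle_consistency_l1(x, g, f):
    """L1 cycle-consistency loss ||F(G(x)) - x||_1, averaged over
    voxels.

    g maps the CBCT domain to the CT domain and f maps back; in a real
    cycleGAN both are trained networks and a symmetric term
    ||G(F(y)) - y||_1 is added for the other direction.
    """
    x = np.asarray(x, dtype=float)
    return float(np.abs(f(g(x)) - x).mean())
```

When `f` exactly inverts `g` the loss vanishes; any residual measures how much anatomical content the round trip destroys, which is what lets training proceed without paired, registered CBCT/CT data.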


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Photons , Proton Therapy , Radiation Dosage , Radiotherapy Planning, Computer-Assisted/methods , Artifacts , Deep Learning , Humans , Male , Radiometry , Radiotherapy Dosage , Radiotherapy, Intensity-Modulated
18.
Med Phys ; 46(9): 4095-4104, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31206701

ABSTRACT

PURPOSE: To develop and evaluate a patch-based convolutional neural network (CNN) that generates synthetic computed tomography (sCT) images for a magnetic resonance (MR)-only workflow for radiotherapy of head and neck tumors. A patch-based deep learning method was chosen to improve robustness to abnormal anatomies caused by large tumors, surgical excisions, or dental artifacts. In this study, we evaluate whether the generated sCT images enable accurate MR-based dose calculations in the head and neck region. METHODS: We conducted a retrospective study on 34 patients with head and neck cancer who underwent both CT and MR imaging for radiotherapy treatment planning. To generate the sCTs, a large field-of-view T2-weighted Turbo Spin Echo (TSE) MR sequence was used from the clinical protocol for multiple types of head and neck tumors. To align the images as well as possible at the voxel level, the CT scans were nonrigidly registered to the MR (CTreg). The CNN was based on a U-Net architecture and consisted of 14 layers with 3 × 3 × 3 filters. Patches of 48 × 48 × 48 voxels were randomly extracted and fed to the network during training. sCTs were created for all patients using threefold cross-validation. For each patient, the clinical CT-based treatment plan was recalculated on the sCT using the Monaco treatment planning system (Elekta). We evaluated the mean absolute error (MAE) and mean error (ME) within the body contours and Dice scores of air and bone masks. Dose differences and gamma pass rates between CT- and sCT-based plans inside the body contours were also calculated. RESULTS: sCT generation took 4 min per patient. The MAE of the sCT over the patient population within the intersection of body contours was 75 ± 9 Hounsfield units (HU) (±1 SD), and the ME was 9 ± 11 HU. Dice scores of the air and bone masks (CTreg vs sCT) were 0.79 ± 0.08 and 0.70 ± 0.07, respectively. Dosimetric analysis showed mean deviations of -0.03% ± 0.05% for dose within the body contours and -0.07% ± 0.22% inside the >90% dose volume. Dental artifacts obscuring the CT could be circumvented in the sCT by the CNN-based approach in combination with the TSE MRI sequence, which is typically less prone to susceptibility artifacts. CONCLUSIONS: The presented CNN generated sCTs from conventional MR images without adding scan time to the acquisition. Dosimetric evaluation suggests that dose calculations performed on the sCTs are accurate and can therefore be used for MR-only radiotherapy treatment planning of the head and neck.
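The random 48 × 48 × 48 patch extraction described in the methods can be sketched as below. This is an illustrative helper, not the authors' pipeline code; it assumes the MR and registered CT volumes are aligned 3-D arrays with every dimension at least the patch size:

```python
import numpy as np

def random_patch_pair(mr, ct, size=48, rng=None):
    """Extract one co-registered MR/CT patch pair at a random corner.

    Because the CT was nonrigidly registered to the MR, the same index
    slice can be applied to both volumes to obtain a paired training
    sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random corner such that the patch fits entirely inside the volume.
    corner = [int(rng.integers(0, s - size + 1)) for s in mr.shape]
    sl = tuple(slice(c, c + size) for c in corner)
    return mr[sl], ct[sl]
```

Training on patches rather than whole volumes keeps memory use modest and, as the abstract argues, makes the model less sensitive to globally abnormal anatomy, since each sample only ever shows the network a local neighbourhood.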


Subject(s)
Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Image Processing, Computer-Assisted , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Magnetic Resonance Imaging , Radiometry , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Intensity-Modulated
19.
Int J Radiat Oncol Biol Phys ; 102(4): 801-812, 2018 11 15.
Article in English | MEDLINE | ID: mdl-30108005

ABSTRACT

PURPOSE: This work aims to facilitate a fast magnetic resonance (MR)-only workflow for radiation therapy of intracranial tumors. Here, we evaluate whether synthetic computed tomography (sCT) images generated with a dilated convolutional neural network (CNN) enable accurate MR-based dose calculations in the brain. METHODS AND MATERIALS: We conducted a retrospective study of 52 patients with brain tumors who underwent both computed tomography (CT) and MR imaging for radiation therapy treatment planning. To generate the sCTs, a T1-weighted gradient echo MR sequence was selected from the clinical protocol for multiple types of brain tumors. sCTs were created for all 52 patients with a dilated CNN using 2-fold cross validation; in each fold, 26 patients were used for training and the remaining 26 patients were used for evaluation. For each patient, the clinical CT-based treatment plan was recalculated on the sCT. We calculated dose differences and gamma pass rates between CT- and sCT-based plans inside the body and planning target volume. Geometric fidelity of the sCT and differences in beam depth and equivalent path length were assessed between both treatment plans. RESULTS: sCT generation took 1 minute per patient. Over the patient population, the mean absolute error of the sCT within the intersection of body contours was 67 ± 11 HU (±1 standard deviation [SD], range: 51-117 HU), and the mean error was 13 ± 9 HU (±1 SD, range: -2 to 38 HU). Dosimetric analysis showed mean deviations of 0.00% ± 0.02% (±1 SD, range: -0.05 to 0.03) for dose within the body contours and -0.13% ± 0.39% (±1 SD, range: -1.43 to 0.80) inside the planning target volume. The mean γ (1 mm, 1%) pass rate was 98.8% ± 2.2% for doses >50% of the prescribed dose. CONCLUSIONS: The presented dilated CNN generated sCTs from conventional MR images without adding scan time to the acquisition. Dosimetric evaluation suggests that dose calculations performed on the sCTs are accurate and can therefore be used for MR-only intracranial radiation therapy treatment planning.
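The γ pass rate reported throughout these entries combines a dose-difference and a distance-to-agreement criterion. A simplified, brute-force global gamma for 1-D dose profiles is sketched below; real analyses are 3-D and use interpolation (tools such as pymedphys provide this), so treat the function and its parameters as illustrative:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_crit, dist_mm, cutoff):
    """Global gamma pass rate (%) for 1-D dose profiles.

    dose_crit is a fraction of the maximum reference dose (e.g. 0.01
    for a 1% criterion); dist_mm is the distance-to-agreement (e.g.
    1 mm). Only points with ref > cutoff * max(ref) are evaluated,
    mirroring the ">50% of prescribed dose" restriction above.
    """
    ref = np.asarray(ref, dtype=float)
    ev = np.asarray(ev, dtype=float)
    x = np.arange(ref.size) * spacing_mm       # point positions in mm
    dmax = ref.max()
    idx = np.where(ref > cutoff * dmax)[0]
    passed = 0
    for i in idx:
        # Normalised dose difference and distance to every eval point;
        # gamma is the minimum of their combined Euclidean norm.
        dd = (ev - ref[i]) / (dose_crit * dmax)
        dx = (x - x[i]) / dist_mm
        gamma = np.sqrt(dd ** 2 + dx ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / idx.size
```

A point passes if some nearby evaluated dose is close enough in the combined dose/distance metric, which is why gamma tolerates small spatial shifts that a pure voxel-wise dose difference would flag.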


Subject(s)
Brain Neoplasms/radiotherapy , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Radiotherapy Planning, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Brain Neoplasms/diagnostic imaging , Humans , Radiotherapy Dosage , Retrospective Studies
20.
Phys Med Biol ; 63(18): 185001, 2018 09 10.
Article in English | MEDLINE | ID: mdl-30109989

ABSTRACT

To enable magnetic resonance (MR)-only radiotherapy and facilitate modelling of radiation attenuation in humans, synthetic CT (sCT) images need to be generated. Considering the application of MR-guided radiotherapy and online adaptive replanning, sCT generation should occur within minutes. This work assessed whether an existing deep learning network can rapidly generate sCT images for accurate MR-based dose calculations in the entire pelvis. A study was conducted on data of 91 patients with prostate (59), rectal (18) and cervical (14) cancer who underwent external beam radiotherapy, with both CT and MRI acquired for treatment simulation. Dixon-reconstructed water, fat and in-phase images obtained from a conventional dual gradient-recalled echo sequence were used to generate the sCT images. A conditional generative adversarial network (cGAN) was trained in a paired fashion on 2D transverse slices of 32 prostate cancer patients. The trained network was tested on the remaining patients to generate sCT images. For 30 patients in the test set, dose recalculations of the clinical plan were performed on the sCT images. Dose distributions were evaluated by comparing voxel-based dose differences and performing gamma and dose-volume histogram (DVH) analyses. The sCT generation required 5.6 s and 21 s for a single patient volume on a GPU and CPU, respectively. On average, sCT images resulted in a higher dose to the target of at most 0.3%. The average gamma pass rates using the 3%, 3 mm and 2%, 2 mm criteria were above 97% and 91%, respectively, for all volumes of interest considered. All DVH points calculated on sCT differed by less than ±2.5% from the corresponding points on CT. The results suggest that accurate MR-based dose calculation using sCT images generated with a cGAN trained on prostate cancer patients is feasible for the entire pelvis. The sCT generation was sufficiently fast for integration into an MR-guided radiotherapy workflow.
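The DVH comparison used in this entry is a cumulative histogram: for each dose level, the percentage of a structure's volume receiving at least that dose. A minimal sketch assuming uniform voxel volume (a simplification relative to clinical treatment planning systems; the function name is illustrative):

```python
import numpy as np

def cumulative_dvh(dose, struct_mask, dose_levels):
    """Cumulative DVH: percentage of the structure volume receiving at
    least each dose level.

    `struct_mask` is a boolean array selecting the structure's voxels;
    uniform voxel volume is assumed, so volume fractions reduce to
    voxel counts.
    """
    d = np.asarray(dose, dtype=float)[struct_mask]
    return np.array([100.0 * (d >= level).mean() for level in dose_levels])
```

Comparing the sCT- and CT-based curves point by point at fixed dose levels gives exactly the "DVH points differed by less than ±2.5%" statement above.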


Subject(s)
Magnetic Resonance Imaging/methods , Pelvis/diagnostic imaging , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Tomography, X-Ray Computed/methods , Female , Humans , Male , Prostatic Neoplasms/radiotherapy , Radiotherapy Dosage , Radiotherapy, Intensity-Modulated/methods , Uterine Cervical Neoplasms/radiotherapy