1.
Med Phys ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742774

ABSTRACT

BACKGROUND: Proton arc therapy (PAT) has emerged as a promising approach for improving dose distribution while also enabling simpler and faster treatment delivery compared to conventional proton treatments. However, the delivery speed achievable in proton arc relies on dedicated algorithms, which currently do not generate plans with a clear speed-up and sometimes even result in increased delivery time. PURPOSE: This study aims to address the challenge of minimizing delivery time through a hybrid method that combines a fast geometry-based energy layer (EL) pre-selection with a dose-based EL filtering, comparing its performance to a baseline approach without filtering. METHODS: Three methods of EL filtering were developed: unrestricted, switch-up (SU), and switch-up gap (SU gap) filtering. The unrestricted method filters out the lowest-weighted ELs, while the SU gap filtering removes the ELs around a new SU to minimize gantry rotation braking. The SU filtering removes the lowest-weighted group of ELs that includes an SU. These filters were combined with the energy layer selection and spot assignment (ELSA) framework of the RayStation dynamic proton arc optimizer. Data from four bilateral oropharyngeal and four lung cancer patients were used for evaluation. Objective function values, target coverage robustness, organ-at-risk doses and normal tissue complication probability evaluations, as well as comparisons to intensity-modulated proton therapy (IMPT) plans, were used to assess plan quality. RESULTS: The SU gap filtering algorithm performed best in five of the eight cases, maintaining plan quality within tolerance while reducing beam delivery time, in particular for the oropharyngeal cohort. It achieved up to approximately 22% and 15% reductions in delivery time for the oropharyngeal and lung treatment sites, respectively. The unrestricted filtering algorithm followed closely. In contrast, the SU filtering showed limited improvement, removing only one or two SUs without substantially shortening delivery time. Robust target coverage was kept within 1% of the PAT baseline plan, while organ-at-risk doses slightly decreased or remained about the same for all patients. CONCLUSIONS: This study provides insights to accelerate PAT delivery without compromising plan quality. These advancements could enhance treatment efficiency and patient throughput.
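As a rough illustration of the simplest of the three filters, the sketch below drops the lowest-weighted energy layers from a plan. The data layout, the dropping budget, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unrestricted_filter(layer_weights, fraction=0.10):
    """Indices of energy layers kept after removing the lowest-weighted
    `fraction` of layers (hypothetical helper, for illustration only)."""
    order = np.argsort(layer_weights)            # lightest layers first
    n_drop = int(fraction * len(layer_weights))
    dropped = set(order[:n_drop].tolist())
    return [i for i in range(len(layer_weights)) if i not in dropped]

# Toy plan: summed spot weight per energy layer (MU-like units).
weights = np.array([5.2, 0.3, 2.1, 0.1, 4.4, 0.2, 3.3, 1.0, 0.05, 2.8])
print(unrestricted_filter(weights, fraction=0.20))  # drops layers 8 and 3
```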

2.
Comput Biol Med ; 171: 108139, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38394800

ABSTRACT

Proton arc therapy (PAT) is an advanced radiotherapy technique using charged particles, in which the radiation device rotates continuously around the patient while irradiating the tumor. Compared to the conventional, fixed-angle beam delivery mode, proton arc therapy has the potential to further improve the quality of cancer treatment by delivering an accurate radiation dose to tumors while minimizing damage to surrounding healthy tissues. However, the computational complexity of treatment planning in PAT poses challenges to its effective implementation. In this paper, we demonstrate that designing a PAT plan through algorithmic methods is an NP-hard (in fact, NP-complete) problem, where the problem size is determined by the number of discrete irradiation angles from which the radiation can be delivered. This finding highlights the inherent complexity of PAT treatment planning and emphasizes the need for efficient algorithms and heuristics to address the challenges associated with optimizing the delivery of radiation doses in this context.
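To make the role of the number of angles concrete, here is a toy brute-force planner that, absent a better algorithm, must examine every subset of angles; the 2^n growth is what the NP-hardness result above formalizes. The scoring function is a placeholder, not a dose model.

```python
from itertools import combinations

def brute_force_plan(n_angles, score):
    """Exhaustively search all 2^n - 1 non-empty angle subsets."""
    best_value, best_subset = float("-inf"), ()
    for k in range(1, n_angles + 1):
        for subset in combinations(range(n_angles), k):
            value = score(subset)
            if value > best_value:
                best_value, best_subset = value, subset
    return best_subset

# Placeholder objective: reward more angles, penalize late gantry stops.
toy_score = lambda s: len(s) - 0.3 * max(s)
print(brute_force_plan(10, toy_score))  # 2^10 subsets; 2^360 is out of reach
```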


Subject(s)
Neoplasms , Proton Therapy , Radiotherapy, Intensity-Modulated , Humans , Protons , Radiotherapy, Intensity-Modulated/methods , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Proton Therapy/methods , Neoplasms/radiotherapy , Algorithms
4.
Med Phys ; 51(1): 485-493, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37942953

ABSTRACT

BACKGROUND: Dose calculation and optimization algorithms in proton therapy treatment planning often have high computational requirements in terms of time and memory. This can hinder the implementation of efficient workflows in clinics and prevent the use of new, elaborate treatment techniques aiming to improve clinical outcomes, such as robust optimization, arc, and adaptive proton therapy. PURPOSE: A new method, the beamlet-free algorithm, is presented to address this issue by combining Monte Carlo dose calculation and optimization into a single algorithm, omitting the computation of the time-consuming and costly dose influence matrix. METHODS: The beamlet-free algorithm simulates the dose in batches of protons drawn from randomly chosen spots and evaluates their relative impact on the objective function at each iteration. Based on the approximated gradient, the spot weights are then updated and used to generate a new spot probability distribution. The beamlet-free method is compared against a conventional, beamlet-based treatment planning algorithm on a brain case and a prostate case. RESULTS: The beamlet-free algorithm maintained comparable plan quality while largely reducing the dependence of computation time and memory usage on the number of spots. CONCLUSION: The implementation of a beamlet-free treatment planning algorithm for proton therapy is feasible and capable of achieving treatment plans of comparable quality to conventional methods. Its efficient use of computational resources and its weak dependence on the number of spots make it a promising method for large plans, robust optimization, and arc proton therapy.
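A schematic, numpy-only sketch of that loop under toy assumptions: `d` stands in for the per-unit-weight Monte Carlo dose response of each spot and a quadratic term for the clinical objective; only the sampled batch is "simulated" at each iteration, and the sampling distribution is refreshed from the updated weights. None of this is the paper's actual engine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spots, batch, lr = 200, 20, 0.05
w = np.ones(n_spots)                       # spot weights being optimized
p = np.full(n_spots, 1.0 / n_spots)        # spot sampling probabilities
d = 0.2 + 0.01 * np.arange(n_spots)        # fake per-unit-weight dose response
target = 60.0 / n_spots                    # toy per-spot prescription share

for it in range(1000):
    spots = rng.choice(n_spots, size=batch, replace=False, p=p)
    # Approximate gradient of sum_s (w_s * d_s - target)^2 on the batch only.
    grad = 2.0 * (w[spots] * d[spots] - target) * d[spots]
    w[spots] = np.maximum(w[spots] - lr * grad, 0.0)
    p = (w + 1e-6) / (w + 1e-6).sum()      # heavier spots get sampled more

print(float(np.abs(w * d - target).mean()))  # residual shrinks over iterations
```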


Subject(s)
Proton Therapy , Radiotherapy, Intensity-Modulated , Male , Humans , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Algorithms , Monte Carlo Method , Radiotherapy, Intensity-Modulated/methods
5.
Phys Med ; 116: 103178, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38000099

ABSTRACT

PURPOSE: Ethos proposes template-based automatic dose planning (Etb) for online adaptive radiotherapy. This study evaluates the general performance of Etb for prostate cancer, as well as its ability to generate patient-optimal plans, by comparing it with another state-of-the-art automatic planning method, i.e., deep learning dose prediction followed by dose mimicking (DP + DM). MATERIALS: General performance and the capability to produce patient-optimal plans were investigated through two studies: Study S1 generated plans for 45 patients using our initial Ethos clinical goals template (EG_init) and compared them to manually generated (MG) plans. For Study S2, 10 patients that showed poor performance in Study S1 were selected. S2 compared the quality of plans generated with four different methods: 1) the initial Ethos template (EG_init_selected), 2) an Ethos template updated based on S1 results (EG_upd_selected), 3) DP + DM, and 4) MG plans. RESULTS: EG_init plans showed satisfactory performance for dose levels above 50 Gy: reported mean metric differences (EG_init minus MG) never exceeded 0.6%. However, lower dose levels showed loosely optimized metrics: mean differences for V30Gy to the rectum and V20Gy to the anal canal were 6.6% and 13.0%. EG_init_selected plans showed amplified differences in V30Gy to the rectum and V20Gy to the anal canal: 8.5% and 16.9%, respectively. These dropped to 5.7% and 11.5% for EG_upd_selected plans, but V60Gy to the rectum strongly increased for two patients. DP + DM plans achieved differences of 3.4% and 4.6% without compromising any V60Gy. CONCLUSION: The general performance of Etb was satisfactory. However, optimizing with a template of goals may be limiting for some complex cases. On our test patients, DP + DM outperformed the Etb approach.
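For intuition about the DP + DM comparator, here is a minimal dose-mimicking sketch: given a DL-predicted dose, find nonnegative beamlet weights whose computed dose best reproduces it. The random influence matrix and plain least squares are toy stand-ins; clinical mimicking optimizes voxel- and DVH-based sub-objectives.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
D = rng.random((500, 40))          # toy dose-influence matrix (voxels x beamlets)
d_pred = rng.random(500) * 2.0     # toy DL-predicted dose per voxel

# Mimicking step: w >= 0 minimizing || D @ w - d_pred ||_2.
w, residual = nnls(D, d_pred)
print(w.round(2), residual)
```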


Subject(s)
Radiotherapy Planning, Computer-Assisted , Radiotherapy, Intensity-Modulated , Male , Humans , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Rectum , Pelvis , Anal Canal , Radiotherapy, Intensity-Modulated/methods , Organs at Risk
6.
Med Phys ; 50(10): 6554-6568, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37676906

ABSTRACT

PURPOSE: An accurate estimation of range uncertainties is essential to exploit the potential of proton therapy. According to Paganetti's study, a value of 2.4% (1.5 standard deviations) is currently recommended for planning robust treatments with Monte Carlo dose engines. This number is dominated by the contribution from the mean excitation energy (I-value) of tissues. However, it was recently shown that expressing tissues as a mixture of water and "dry" material in the CT calibration process allows for a significant reduction of this uncertainty. We thus propose an adapted framework for pencil beam scanning robust optimization. First, we move towards a spot-specific range uncertainty (SSRU) determination. Second, we use the water-based formalism to reduce range uncertainties and, potentially, to spare the organs at risk better. METHODS: The stoichiometric calibration was adapted to provide a molecular decomposition (including water) of each voxel of the CT. The SSRU calculation was implemented in MCsquare, a fast Monte Carlo dose engine dedicated to proton therapy. For each spot, a ray-tracing method was used to propagate molecular I-value uncertainties and obtain the corresponding effective range uncertainty. These were then combined with the other sources of range uncertainty, according to Paganetti's study of 2012. The method was then assessed on three head-and-neck patients. Two plans were optimized for each patient: the first with the classical 2.4% flat range uncertainty (FRU), the second with the variable range uncertainty. Both plans were then compared in terms of target coverage and OAR mean dose reduction. Robustness evaluations were also performed, using the SSRU for both plans in order to simulate errors as realistically as possible. RESULTS: For patient 1, the median SSRU was 1.04% (1.5 standard deviations), a very large reduction from the 2.4% FRU. All three SSRU plans were found to have a very good robustness level at a 90% confidence interval while sparing OARs better than the classical plans. For instance, in the nominal cases, average reductions in mean dose of 15.7%, 8.4%, and 13.2% were observed in the left parotid, right parotid, and pharyngeal constrictor muscle, respectively. As expected, the classical plans showed a higher, but unnecessary, level of robustness. CONCLUSIONS: Promising results of the SSRU framework were observed on three head-and-neck cases, and more patients should now be considered. The method could also benefit other tumor sites and, in the long run, the variable part of the range uncertainty could be generalized to other sources of uncertainty, moving towards increasingly patient-specific treatments.
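A minimal sketch of assembling a spot-specific range uncertainty: a ray-traced I-value contribution is combined with the remaining, spot-independent sources from Paganetti's 2012 budget. Both the example numbers and the quadrature combination rule are assumptions for illustration; the paper defines the exact recipe.

```python
import numpy as np

def ssru(ivalue_pct, other_sources_pct=(0.6, 0.5, 0.4)):
    """Combine a per-spot I-value range uncertainty (%) with other,
    spot-independent sources (%) in quadrature; return a 1.5-sigma value.
    All numbers here are illustrative placeholders."""
    one_sigma = np.sqrt(ivalue_pct**2 + np.sum(np.square(other_sources_pct)))
    return 1.5 * one_sigma

print(round(ssru(0.3), 2))   # small I-value term -> well below the flat 2.4%
```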


Subject(s)
Head and Neck Neoplasms , Proton Therapy , Radiotherapy, Intensity-Modulated , Humans , Proton Therapy/methods , Uncertainty , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Water , Organs at Risk
7.
Med Phys ; 50(10): 6201-6214, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37140481

ABSTRACT

BACKGROUND: In cancer care, determining the most beneficial treatment technique is a key decision affecting the patient's survival and quality of life. Patient selection for proton therapy (PT) over conventional radiotherapy (XT) currently entails comparing manually generated treatment plans, which requires time and expertise. PURPOSE: We developed an automatic and fast tool, AI-PROTIPP (Artificial Intelligence Predictive Radiation Oncology Treatment Indication to Photons/Protons), that quantitatively assesses the benefits of each therapeutic option. Our method uses deep learning (DL) models to directly predict the dose distributions for a given patient for both XT and PT. By using models that estimate the normal tissue complication probability (NTCP), namely the likelihood of side effects occurring for a specific patient, AI-PROTIPP can propose a treatment selection quickly and automatically. METHODS: A database of 60 patients presenting oropharyngeal cancer, obtained from the Cliniques Universitaires Saint Luc in Belgium, was used in this study. For every patient, a PT plan and an XT plan were generated. The dose distributions were used to train the two DL dose prediction models (one per modality). The models are based on the U-Net architecture, a type of convolutional neural network currently considered the state of the art for dose prediction. An NTCP protocol used in the Dutch model-based approach, including grade II and III xerostomia and grade II and III dysphagia, was then applied to perform automatic treatment selection for each patient. The networks were trained using a nested cross-validation approach with 11 folds. We set aside three patients in an outer set; each fold consists of 47 patients for training, five for validation, and five for testing. This allowed us to assess our method on 55 patients (five test patients per fold times the number of folds). RESULTS: Treatment selection based on the DL-predicted doses reached an accuracy of 87.4% for the threshold parameters set by the Health Council of the Netherlands. The selected treatment is directly linked to these threshold parameters, as they express the minimal gain brought by PT for a patient to be indicated for PT. To validate the performance of AI-PROTIPP in other conditions, we modulated these thresholds, and the accuracy remained above 81% in all considered cases. The average cumulative NTCP per patient is very similar between predicted and clinical dose distributions (less than 1% difference). CONCLUSIONS: AI-PROTIPP shows that using DL dose prediction in combination with NTCP models to select PT for patients is feasible and can save time by avoiding the generation of treatment plans used only for comparison. Moreover, DL models are transferable, allowing, in the future, experience to be shared with centers that lack PT planning expertise.
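The selection step reduces to evaluating NTCP on both predicted dose distributions and referring the patient when the summed NTCP reduction clears a threshold. The sketch below assumes a univariate logistic NTCP per endpoint with invented coefficients (the Dutch model's published values and full covariates are not reproduced here).

```python
import math

def ntcp_logistic(mean_dose, b0=-4.0, b1=0.10):
    """Placeholder logistic NTCP driven by a mean organ dose (Gy)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mean_dose)))

def select_modality(doses_xt, doses_pt, threshold=0.10):
    """doses_*: mean organ doses (Gy), one per toxicity endpoint.
    The same placeholder coefficients are reused across endpoints."""
    delta = sum(ntcp_logistic(x) - ntcp_logistic(p)
                for x, p in zip(doses_xt, doses_pt))
    return "PT" if delta >= threshold else "XT"

print(select_modality(doses_xt=[35.0, 42.0], doses_pt=[22.0, 30.0]))  # -> PT
```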


Subject(s)
Deep Learning , Oropharyngeal Neoplasms , Proton Therapy , Radiotherapy, Intensity-Modulated , Humans , Proton Therapy/adverse effects , Proton Therapy/methods , Patient Selection , Artificial Intelligence , Quality of Life , Radiotherapy Planning, Computer-Assisted/methods , Organs at Risk/radiation effects , Oropharyngeal Neoplasms/radiotherapy , Probability , Radiotherapy Dosage , Radiotherapy, Intensity-Modulated/methods
8.
Phys Med Biol ; 67(24)2022 12 13.
Article in English | MEDLINE | ID: mdl-36541505

ABSTRACT

Objective. Proton arc therapy (PAT) is a new delivery technique that exploits the continuous rotation of the gantry to distribute the therapeutic dose over many angular windows instead of using a few static fields, as in conventional (intensity-modulated) proton therapy. Although it comes with many potential clinical and dosimetric benefits, PAT also raises a new optimization challenge. In addition to the dosimetric goals, the beam delivery time (BDT) needs to be considered in the objective function. Given this bi-objective formulation, the task of finding a good compromise with appropriate weighting factors can turn out to be cumbersome. Approach. We computed Pareto-optimal plans for three disease sites: a brain, a lung, and a liver case, following a method that iteratively chooses weight vectors to approximate the Pareto front with few points. Mixed-integer programming (MIP) was selected to state the bi-criteria PAT problem and to find Pareto-optimal points with a suitable solver. Main results. The trade-offs between plan quality and beam irradiation time (static BDT) are investigated by inspecting three plans from the Pareto front, carefully picked to demonstrate significant differences in dose distribution and delivery time depending on their location on the frontier. The results were benchmarked against IMPT and SPArc plans, showing the strength of the additional degrees of freedom that come with MIP optimization. Significance. This paper presents for the first time the application of bi-criteria optimization to the PAT problem, which ultimately lets planners select the best treatment strategy according to the patient's condition and the clinical resources available.
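A toy illustration of approximating a Pareto front by sweeping weight vectors over a bi-objective (plan quality vs. delivery time). The paper states the problem as a MIP and picks weights adaptively; here a dense 1-D scan with two placeholder quadratics makes the trade-off mechanics visible.

```python
import numpy as np

f_quality = lambda x: (x - 1.0) ** 2        # lower = better plan quality (toy)
f_time    = lambda x: (x + 1.0) ** 2        # lower = faster delivery (toy)

front = []
for lam in np.linspace(0.05, 0.95, 7):      # weight on plan quality
    xs = np.linspace(-2.0, 2.0, 4001)
    scalarized = lam * f_quality(xs) + (1.0 - lam) * f_time(xs)
    x_star = xs[np.argmin(scalarized)]
    front.append((round(f_quality(x_star), 3), round(f_time(x_star), 3)))

print(front)   # trade-off points: better quality <-> longer delivery
```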


Subject(s)
Proton Therapy , Radiotherapy, Intensity-Modulated , Humans , Proton Therapy/methods , Protons , Radiotherapy Planning, Computer-Assisted/methods , Radiometry , Radiotherapy, Intensity-Modulated/methods , Radiotherapy Dosage
9.
Radiother Oncol ; 176: 101-107, 2022 11.
Article in English | MEDLINE | ID: mdl-36167194

ABSTRACT

BACKGROUND AND PURPOSE: This study investigates how accurate our deep learning (DL) dose prediction models for intensity-modulated radiotherapy (IMRT) and pencil beam scanning (PBS) treatments, when chained with normal tissue complication probability (NTCP) models, are at identifying esophageal cancer patients who are at high risk of toxicity and should be switched to proton therapy (PT). MATERIALS AND METHODS: Two U-Nets were created, one for photon (XT) and one for proton (PT) plans. To estimate the dose distribution for each patient, they were trained on a database of 40 uniformly planned patients using cross-validation with a circulating test set. These models were combined with an NTCP model for postoperative pulmonary complications, which uses the mean lung dose, age, histology type, and body mass index as predicting variables. The treatment choice is then made by applying a ΔNTCP threshold between the XT and PT plans: patients with ΔNTCP ≥ 10% were referred to PT. RESULTS: Our DL models succeeded in predicting dose distributions with a mean error on the mean lung dose (MLD) of 1.14 ± 0.93% for XT and 0.66 ± 0.48% for PT. The complete automated workflow (DL chained with NTCP) achieved 100% accuracy in patient referral. The average residual (ΔNTCP ground truth − ΔNTCP predicted) is 1.43 ± 1.49%. CONCLUSION: This study evaluates our DL dose prediction models in a broader patient-referral context and demonstrates their ability to support clinical decisions.
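A hedged sketch of the referral rule above: a multivariable logistic NTCP for postoperative pulmonary complications driven by mean lung dose (MLD), age, histology, and BMI, evaluated on both plans. Only the variable set and the 10% ΔNTCP rule come from the abstract; the coefficients are invented placeholders.

```python
import math

def ntcp_ppc(mld_gy, age, squamous, bmi,
             b=(-6.0, 0.12, 0.04, 0.5, 0.05)):   # placeholder coefficients
    """Toy logistic NTCP for postoperative pulmonary complications."""
    z = b[0] + b[1] * mld_gy + b[2] * age + b[3] * squamous + b[4] * bmi
    return 1.0 / (1.0 + math.exp(-z))

def refer_to_pt(mld_xt, mld_pt, age, squamous, bmi, threshold=0.10):
    """Refer when the proton plan lowers NTCP by >= threshold."""
    delta = (ntcp_ppc(mld_xt, age, squamous, bmi)
             - ntcp_ppc(mld_pt, age, squamous, bmi))
    return delta >= threshold

print(refer_to_pt(mld_xt=14.0, mld_pt=6.0, age=68, squamous=1, bmi=24))  # True
```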


Subject(s)
Decision Support Systems, Clinical , Deep Learning , Esophageal Neoplasms , Proton Therapy , Radiotherapy, Intensity-Modulated , Humans , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Intensity-Modulated/adverse effects , Proton Therapy/adverse effects , Probability , Esophageal Neoplasms/radiotherapy , Radiotherapy Dosage
10.
Comput Biol Med ; 148: 105609, 2022 09.
Article in English | MEDLINE | ID: mdl-35803749

ABSTRACT

Arc proton therapy (ArcPT) is an emerging modality in cancer treatment. It delivers the proton beams following a sequence of irradiation angles while the gantry continuously rotates around the patient. Compared to conventional proton treatments (intensity-modulated proton therapy, IMPT), the number of beams is significantly increased, bringing new degrees of freedom that can lead to better cancer care. However, the optimization of such treatment plans becomes more complex, and several alternative statements of the problem can be considered and compared in order to solve the ArcPT problem. Three such problem statements, distinct in their mathematical formulation and properties, are investigated and applied to the ArcPT optimization problem. They make use of (i) the fast iterative shrinkage-thresholding algorithm (FISTA), (ii) local search (LS), and (iii) mixed-integer programming (MIP). The treatment plans obtained with these methods are compared among themselves, but also with IMPT and an existing state-of-the-art method: Spot-Scanning Proton Arc (SPArc). MIP stands out on small-scale problems, both in terms of dose quality and delivery-time efficiency. FISTA achieves high dose quality but has difficulty optimizing the energy sequence, while LS shows largely the opposite behavior. This detailed study describes independent approaches to solving the ArcPT problem; depending on the clinical case, one should be carefully chosen over the others. This paper gives the first formal definition of the problem at stake, as well as a first reference benchmark. Finally, empirical conclusions are drawn, based on realistic assumptions.
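A compact FISTA sketch in the spirit of option (i): accelerated proximal gradient on a least-squares dose fit with an L1 term that drives spot weights to zero (sparsity is what eases energy-layer sequencing). The toy matrix stands in for the dose-influence matrix; nothing here encodes gantry dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((300, 120))            # toy dose-influence matrix
b = rng.random(300) * 50.0            # toy prescription
lam = 5.0                             # sparsity strength (assumed)
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient

x = np.zeros(120)
y, t = x.copy(), 1.0
for _ in range(200):
    grad = A.T @ (A @ y - b)
    # Prox of lam*||x||_1 with x >= 0: one-sided soft-threshold.
    x_new = np.maximum(y - grad / L - lam / L, 0.0)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
    x, t = x_new, t_new

print((x > 0).sum(), "active spots of", x.size)
```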


Subject(s)
Proton Therapy , Radiotherapy, Intensity-Modulated , Algorithms , Humans , Protons , Radiotherapy Planning, Computer-Assisted
11.
Phys Med Biol ; 67(11)2022 05 27.
Article in English | MEDLINE | ID: mdl-35421855

ABSTRACT

The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new deep learning techniques, convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is notably at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data, and in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which decreases as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows, as well as vendors' perspectives on the clinical implementation of ML.


Subject(s)
Radiation Oncology , Machine Learning , Neural Networks, Computer
12.
Phys Med ; 89: 93-103, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34358755

ABSTRACT

INTRODUCTION: Monte Carlo (MC) algorithms provide accurate modeling of dose calculation by simulating the delivery and interaction of many particles through the patient geometry. Fast MC simulations using a large number of particles are desirable, as they can lead to reliable clinical decisions. In this work, we assume that faster simulations with fewer particles can approximate slower ones by denoising them with deep learning. MATERIALS AND METHODS: We use the mean squared error (MSE) as the loss function to train two networks (sNet and dUNet), in 2.5D and 3D setups considering volumes of 7 and 24 slices, respectively. Our models are trained on proton therapy MC dose distributions of six different tumor sites acquired from 50 patients. We provide the networks with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping distributions simulated with 1 × 10⁹ particles as reference. RESULTS: On average over 10 new patients with different tumor sites, in 2.5D and 3D, our models recover a relative residual error on the target volume, ΔD95-TV, of 0.67 ± 0.43% and 1.32 ± 0.87% for sNet vs. 0.83 ± 0.53% and 1.66 ± 0.98% for dUNet, compared to the noisy input at 12.40 ± 4.06%. Moreover, the denoising time for a dose distribution is < 9 s and < 1 s for sNet vs. < 16 s and < 1.5 s for dUNet in 2.5D and 3D, in comparison to about 100 min for an MC simulation using 1 × 10⁹ particles. CONCLUSION: We propose a fast framework that can successfully denoise MC dose distributions. Starting from MC doses with only 1 × 10⁶ particles, the networks provide results comparable to MC doses with 1 × 10⁹ particles, reducing simulation time significantly.
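A minimal training sketch for this setup: a small CNN maps a low-statistics MC dose volume (7 adjacent slices as channels, the 2.5D setting) to its high-statistics reference under an MSE loss. The random tensors stand in for paired low- vs. high-particle-count dose maps; sNet and dUNet themselves are larger architectures.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                      # toy stand-in for sNet/dUNet
    nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 7, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 7, 64, 64)                 # fake high-statistics refs
noisy = clean + 0.10 * torch.randn_like(clean)   # fake low-statistics inputs

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(net(noisy), clean)            # MSE between output and ref
    loss.backward()
    opt.step()
```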


Subject(s)
Neoplasms , Proton Therapy , Algorithms , Humans , Monte Carlo Method , Neoplasms/radiotherapy , Neural Networks, Computer , Phantoms, Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted
13.
Phys Med ; 83: 242-256, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33979715

ABSTRACT

Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology, or oncology, have seized the opportunity, and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader to understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.


Subject(s)
Artificial Intelligence , Radiology , Algorithms , Machine Learning , Technology
14.
Phys Med ; 83: 52-63, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33713919

ABSTRACT

PURPOSE: To investigate the effect of data quality and quantity on the performance of deep learning (DL) models for dose prediction in intensity-modulated radiotherapy (IMRT) of esophageal cancer. MATERIAL AND METHODS: Two databases were used: a variable database (VarDB) with 56 clinical cases extracted retrospectively, including user-dependent variability in delineation and planning as well as different machines and beam configurations; and a homogenized database (HomDB), created to reduce this variability by re-contouring and re-planning all patients with a fixed class-solution protocol. Experiment 1 analysed the user-dependent variability, using 26 patients planned with the same machine and beam setup (E26-VarDB versus E26-HomDB). Experiment 2 increased the training set in groups of 10 patients (E16, E26, E36, E46, and E56) for both databases. Model evaluation metrics were the mean absolute error (MAE) for selected dose-volume metrics and the global MAE over all body voxels. RESULTS: For Experiment 1, E26-HomDB reduced the MAE for the considered dose-volume metrics compared to E26-VarDB (e.g., reductions of 0.2 Gy for D95-PTV, 1.2 Gy for Dmean-heart, and 3.3% for V5-lungs). For Experiment 2, increasing the database size slightly improved performance for the HomDB models (e.g., a decrease in global MAE of 0.13 Gy for E56-HomDB versus E26-HomDB) but increased the error for the VarDB models (e.g., an increase in global MAE of 0.20 Gy for E56-VarDB versus E26-VarDB). CONCLUSION: A small database may suffice to obtain good DL prediction performance, provided that homogeneous training data are used. Data variability reduces the performance of DL models, an effect that is further pronounced when the training set grows.
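For reference, the evaluation metrics above reduce to simple dose-volume statistics compared between predicted and clinical dose arrays. The sketch below computes illustrative D95, Dmean, and V5 absolute errors on random placeholder doses.

```python
import numpy as np

def d95(dose):
    """Dose received by 95% of the volume = 5th percentile of voxel doses."""
    return np.percentile(dose, 5)

def v_x(dose, x_gy):
    """Percent of volume receiving at least x_gy."""
    return 100.0 * np.mean(dose >= x_gy)

rng = np.random.default_rng(3)
pred = rng.random(10_000) * 50.0   # toy predicted structure dose (Gy)
ref  = rng.random(10_000) * 50.0   # toy clinical structure dose (Gy)

errors = {
    "D95 [Gy]":   abs(d95(pred) - d95(ref)),
    "Dmean [Gy]": abs(pred.mean() - ref.mean()),
    "V5 [%]":     abs(v_x(pred, 5.0) - v_x(ref, 5.0)),
}
print(errors)
```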


Subject(s)
Deep Learning , Esophageal Neoplasms , Radiotherapy, Intensity-Modulated , Data Accuracy , Esophageal Neoplasms/radiotherapy , Humans , Organs at Risk , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Retrospective Studies
15.
Comput Biol Med ; 131: 104269, 2021 04.
Article in English | MEDLINE | ID: mdl-33639352

ABSTRACT

In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone-beam CT (CBCT) is often acquired to monitor anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained on CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies that leverage large databases of annotated CTs to train neural networks for segmentation on CBCT. The adversarial networks consist of a 3D U-Net segmenter and a domain classifier; the framework is aimed at encouraging the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate, respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 for "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, outperforming the morphons registration algorithm for the bladder (0.813) and rectum (0.653), while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where the training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on target and lower the dose delivered to healthy organs.
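A minimal sketch of the intensity-based augmentation idea: randomly perturb the intensities of training CT volumes so the segmenter also sees CBCT-like inputs. The perturbation model (global gain and offset plus noise) and its ranges are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def intensity_augment(ct, rng):
    """Return a CBCT-like perturbed copy of a CT volume (toy model)."""
    gain   = rng.uniform(0.85, 1.15)           # global contrast change
    offset = rng.uniform(-50.0, 50.0)          # global HU shift
    noise  = rng.normal(0.0, 15.0, ct.shape)   # scatter-like noise
    return gain * ct + offset + noise

rng = np.random.default_rng(4)
ct = rng.normal(0.0, 300.0, (32, 64, 64))      # toy CT volume in HU
augmented = intensity_augment(ct, rng)          # feed to segmenter training
```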


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Algorithms , Humans , Male , Pelvis , Prostate , Radiotherapy Planning, Computer-Assisted
16.
Hum Brain Mapp ; 41(18): 5164-5175, 2020 12 15.
Article in English | MEDLINE | ID: mdl-32845057

ABSTRACT

Anatomical brain templates are commonly used as references in neurological MRI studies, to bring data into a common space for group-level statistics and coordinate reporting. Given the inherent variability in brain morphology across age and geography, it is important to have templates that are as representative as possible of both age and population. A representative template increases the accuracy of alignment and decreases distortions, as well as potential biases, in final coordinate reports. In this study, we developed and validated a new set of T1w Indian brain templates (IBT) from a large number of brain scans (total n = 466) acquired across different locations and multiple 3T MRI scanners in India. A new tool in AFNI, make_template_dask.py, was created to efficiently make five age-specific IBTs (ages 6-60 years) as well as maximum probability map (MPM) atlases for each template; for each age group's template-atlas pair, there is both a "population-average" and a "typical" version. Validation experiments on an independent Indian structural and functional MRI dataset show the appropriateness of the IBTs for spatial normalization of Indian brains. The results indicate significant structural differences between the IBTs and the MNI template, with these differences being maximal along the anterior-posterior and inferior-superior axes, but minimal along the left-right axis. For each age group, the MPM brain atlases provide a reasonably good representation of the native-space volumes in the IBT space, except in a few regions with high inter-subject variability. These findings support the use of age- and population-specific templates in human brain mapping studies.


Subject(s)
Algorithms , Atlases as Topic , Brain/anatomy & histology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adolescent , Adult , Child , Female , Humans , India , Male , Middle Aged , Retrospective Studies , Young Adult
17.
Med Phys ; 47(7): 2746-2754, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32155667

ABSTRACT

PURPOSE: Robust optimization is a computationally expensive process, resulting in long plan computation times. This issue is especially critical for moving targets, as these cases need a large number of uncertainty scenarios to robustly optimize their treatment plans. In this study, we propose a novel worst-case robust optimization algorithm, called dynamic minimax, that accelerates conventional minimax optimization. Dynamic minimax optimization aims at speeding up the plan optimization process by decreasing the number of scenarios evaluated during optimization. METHODS: For a given pool of scenarios (e.g., 63 = 7 setup × 3 range × 3 breathing phases), the proposed dynamic minimax algorithm considers only a reduced number of candidate worst-case scenarios, selected from the full 63-scenario set. These scenarios are updated throughout the optimization by randomly sampling new scenarios according to a hidden variable P, called the "probability acceptance function," which associates with each scenario the probability of its being selected as the worst case. By doing so, the algorithm favors scenarios that are mostly "active," that is, frequently evaluated as the worst case. Additionally, unconsidered scenarios have the possibility of being reconsidered later in the optimization, depending on the convergence towards a particular solution. The proposed algorithm was implemented in the open-source robust optimizer MIROpt and tested on six four-dimensional (4D) IMPT lung tumor patients with various tumor sizes and motions. Treatment plans were evaluated by performing comprehensive robustness tests (simulating range errors, systematic setup errors, and breathing motion) using the open-source Monte Carlo dose engine MCsquare. RESULTS: The dynamic minimax algorithm achieved an optimization time gain of 84% on average. Dynamic minimax optimization results in a significantly noisier optimization process, because more scenarios are accessed in the optimization. However, the increased noise level does not harm the final quality of the plan. In fact, plan quality is similar between dynamic and conventional minimax optimization with regard to target coverage and normal tissue sparing: on average, the difference in worst-case D95 is 0.2 Gy, and the differences in mean lung dose and mean heart dose are 0.4 and 0.1 Gy, respectively (evaluated in the nominal scenario). CONCLUSIONS: The proposed worst-case 4D robust optimization algorithm achieves a significant optimization time gain of 84% without compromising target coverage or normal tissue sparing.
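A schematic version of the sampling idea: only a few candidate scenarios are evaluated per iteration, drawn from a probability acceptance function P that is reinforced whenever a scenario turns out to be the worst case. The static cost vector and the decay/reinforcement constants are toy assumptions; in reality each evaluation is a full objective computation inside the optimizer.

```python
import numpy as np

rng = np.random.default_rng(5)
n_scen, n_candidates = 63, 8                 # 7 setup x 3 range x 3 phases
cost = rng.random(n_scen)                    # toy per-scenario objective values
P = np.full(n_scen, 1.0 / n_scen)            # probability acceptance function

for it in range(200):
    cand = rng.choice(n_scen, size=n_candidates, replace=False, p=P)
    worst = cand[np.argmax(cost[cand])]      # minimax step sees only the batch
    P *= 0.99                                # slowly forget all scenarios...
    P[worst] += 0.08                         # ...and reinforce the active one
    P /= P.sum()                             # keep P a valid distribution

# P tends to concentrate on the frequently-worst scenarios:
print(np.argsort(P)[-3:], np.argsort(cost)[-3:])
```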


Subject(s)
Proton Therapy , Radiotherapy, Intensity-Modulated , Algorithms , Humans , Monte Carlo Method , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted
18.
Med Phys ; 47(2): 681-692, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31660623

ABSTRACT

PURPOSE: Due to the increasing complexity of IMRT/IMPT treatments, quality assurance (QA) is essential to verify the quality of the dose distribution actually delivered. In this context, Monte Carlo (MC) simulations are more and more often used to verify the accuracy of the treatment planning system (TPS). The most common method of dose comparison is the γ-test, which combines dose-difference and distance-to-agreement (DTA) criteria. However, this method is known to depend on the noise level in the dose distributions. We propose here a method to correct the bias in the γ passing rate (GPR) induced by MC noise. METHODS: The GPR amplitude was studied as a function of the MC noise level, and a model of this noise effect was mathematically derived. This model was then used to predict the time-consuming low-noise GPR by fitting multiple fast MC dose calculations. MC dose maps with noise levels between 2% and 20% were computed, and the GPR was predicted at a noise level of 0.3%. Due to the asymmetry of the γ-test, two different cases were considered: the MC dose was first set as the reference dose, then as the evaluated dose in the γ-test. Our method was applied to six proton therapy plans including analytical doses from the TPS or patient-specific QA measurements. RESULTS: An average absolute error of 4.31% was observed on the GPR computed for MC doses with 2% statistical noise. Our method was able to improve the accuracy of the gamma passing rate by up to 13%. The method was found especially efficient at correcting the noise bias when the DTA criterion is low. CONCLUSIONS: We propose a method to enhance the γ-evaluation of a treatment plan when there is noise in one of the compared distributions. The method allows, in tractable time, the detection of cases for which a correction is necessary, and can improve the accuracy of the resulting passing rates.
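The prediction step can be pictured as follows: measure the GPR on several fast, noisy MC runs, fit a smooth model of GPR versus noise level, and read off the expensive low-noise GPR by extrapolation. Both the GPR values and the quadratic model below are illustrative placeholders; the paper derives the actual noise-response function mathematically.

```python
import numpy as np

sigma = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0])        # noise levels (%)
gpr   = np.array([95.1, 93.8, 91.9, 89.4, 86.5, 78.9, 69.8])    # illustrative GPRs

coeffs = np.polyfit(sigma, gpr, deg=2)        # assumed quadratic noise response
gpr_low_noise = np.polyval(coeffs, 0.3)       # predicted GPR at 0.3% noise
print(round(float(gpr_low_noise), 2))
```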


Subject(s)
Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Algorithms , Humans , Image Enhancement , Models, Theoretical , Monte Carlo Method , Quality Assurance, Health Care , Radiotherapy Dosage , Reproducibility of Results , Signal-To-Noise Ratio
19.
Front Neuroinform ; 13: 67, 2019.
Article in English | MEDLINE | ID: mdl-31749693

ABSTRACT

In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on a completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods in terms of the similarity between the segmentation predictions and the FreeSurfer labels, and in the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error, and that the uncertainty across the whole brain can predict the manual quality-control ratings of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem.
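The sketch below shows the general mechanism of sampling-based uncertainty as an error flag: repeated stochastic forward passes give a per-voxel predictive distribution whose spread marks likely mistakes. Plain Monte Carlo dropout is used here for brevity; the paper's spike-and-slab variational scheme is more involved.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                           # toy stochastic segmenter
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout3d(0.2),
    nn.Conv3d(16, 4, 3, padding=1),            # 4 toy tissue classes
)
net.train()                                    # keep dropout active at test time

x = torch.rand(1, 1, 16, 32, 32)               # toy MRI volume
with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=1) for _ in range(20)])

mean_seg  = probs.mean(0).argmax(1)            # consensus segmentation
voxel_unc = probs.var(0).sum(1)                # high values flag likely errors
```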

20.
Med Phys ; 46(12): 5790-5798, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31600829

ABSTRACT

PURPOSE: Monte Carlo (MC) algorithms offer accurate modeling of dose calculation by simulating the transport and interactions of many particles through the patient geometry. However, given their random nature, the resulting dose distributions have statistical uncertainty (noise), which prevents making reliable clinical decisions. This issue is partly addressable by using a huge number of simulated particles, but doing so is computationally expensive, as it results in significantly greater computation times. There is therefore a trade-off between computation time and the noise level in MC dose maps. In this work, we address the mitigation of the noise inherent to MC dose distributions using a dilated U-Net, an encoder-decoder-style fully convolutional neural network that allows fast and fully automated denoising of whole-volume dose maps. METHODS: We use the mean squared error (MSE) as the loss function to train the model, where training is done in 2D and 2.5D settings by considering a number of adjacent slices. Our model is trained on proton therapy MC dose distributions of different tumor sites (brain, head and neck, liver, lungs, and prostate) acquired from 35 patients. We provide the network with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping distributions simulated with 1 × 10⁹ particles as reference. RESULTS: After training, our model successfully denoises new MC dose maps. On average (over five patients with different tumor sites), our model recovers a D95 of 55.99 Gy from the noisy MC input of 49.51 Gy, whereas the low-noise MC reference offers 56.03 Gy. We observed a significantly lower average RMSE (thresholded >10% of the reference maximum) for reference vs. denoised (1.25 Gy) than for reference vs. input (16.96 Gy), leading to an improvement in signal-to-noise ratio (ISNR) of 18.06 dB. Moreover, the inference time of our model for a dose distribution is less than 10 s, vs. 100 min for an MC simulation using 1 × 10⁹ particles. CONCLUSIONS: We propose an end-to-end fully convolutional network that can denoise Monte Carlo dose distributions. The network provides qualitative and quantitative results comparable to an MC dose distribution simulated with 1 × 10⁹ particles, offering a significant reduction in computation time.
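To illustrate the dilation idea in "dilated U-Net": stacking convolutions with growing dilation rates widens the receptive field without pooling, which suits whole-volume dose denoising. This toy block only demonstrates the mechanism; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

layers = []
for d in (1, 2, 4, 8):                         # growing dilation rates
    in_ch = 1 if d == 1 else 16
    # padding=d with dilation=d keeps the spatial size for 3x3 kernels.
    layers += [nn.Conv2d(in_ch, 16, 3, padding=d, dilation=d), nn.ReLU()]
layers.append(nn.Conv2d(16, 1, 1))             # back to a single dose channel
block = nn.Sequential(*layers)

noisy = torch.rand(2, 1, 64, 64)               # fake noisy MC dose slices
denoised = block(noisy)                        # same size, wide receptive field
```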


Subject(s)
Monte Carlo Method , Radiation Dosage , Radiotherapy Planning, Computer-Assisted/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted , Neural Networks, Computer , Uncertainty