ABSTRACT
Objective. Gradient-based optimization using algorithmic derivatives can be a useful technique to improve engineering designs with respect to a computer-implemented objective function. Likewise, uncertainty quantification through computer simulations can be carried out by means of derivatives of the computer simulation. However, the effectiveness of these techniques depends on how 'well-linearizable' the software is. In this study, we assess how promising derivative information of a typical proton computed tomography (pCT) scan computer simulation is for the aforementioned applications. Approach. This study is mainly based on numerical experiments, in which we repeatedly evaluate three representative computational steps with perturbed input values. We support our observations with a review of the algorithmic steps and arithmetic operations performed by the software, using debugging techniques. Main results. The model-based iterative reconstruction (MBIR) subprocedure (at the end of the software pipeline) and the Monte Carlo (MC) simulation (at the beginning) were piecewise differentiable. However, the observed high density and magnitude of jumps were likely to preclude most meaningful uses of the derivatives. Jumps in the MBIR function arose from the discrete computation of the set of voxels intersected by a proton path, and could be reduced in magnitude by a 'fuzzy voxels' approach. The investigated jumps in the MC function arose from local changes in the control flow that affected the number of random numbers consumed. The tracking algorithm solves an inherently non-differentiable problem. Significance. Besides the technical challenges of merely applying algorithmic differentiation (AD) to existing software projects, the MC and MBIR codes must be adapted to compute smoother functions. For the MBIR code, we present one possible approach for this, while for the MC code, this will be the subject of further research. For the tracking subprocedure, further research on surrogate models is necessary.
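To make the 'fuzzy voxels' idea concrete, here is a minimal sketch (in Python; the function names and the sigmoid ramp are illustrative assumptions, not the implementation studied in the paper). A hard 0/1 voxel-membership test makes the reconstruction objective jump whenever a perturbed proton path enters or leaves a voxel, whereas a smooth weight that ramps across the voxel boundary yields a differentiable contribution:

```python
# Hypothetical sketch of 'fuzzy voxels' smoothing; not the code under study.
import numpy as np

def hard_weight(path_dist, half_width):
    """1 if the path passes within the voxel half-width, else 0 (discontinuous)."""
    return (np.abs(path_dist) <= half_width).astype(float)

def fuzzy_weight(path_dist, half_width, softness=0.05):
    """Sigmoid ramp across the voxel boundary; differentiable in path_dist."""
    return 1.0 / (1.0 + np.exp((np.abs(path_dist) - half_width) / softness))

# A small perturbation of the path flips the hard test from 1 to 0,
# while the fuzzy weight changes smoothly.
for d in (0.499, 0.501):  # distance of the path from the voxel center
    print(d, hard_weight(d, 0.5), round(float(fuzzy_weight(d, 0.5)), 3))
```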
Subject(s)
Protons, X-Ray Computed Tomography, Computer Simulation, Imaging Phantoms, X-Ray Computed Tomography/methods, Software, Algorithms, Monte Carlo Method
ABSTRACT
Objective. Proton therapy is highly sensitive to range uncertainties due to the nature of the dose deposition of charged particles. To ensure treatment quality, range verification methods can be used to verify that the individual spots in a pencil beam scanning treatment fraction match the treatment plan. This study introduces a novel metric for proton therapy quality control based on uncertainties in range verification of individual spots. Approach. We employ uncertainty-aware deep neural networks to predict the Bragg peak depth in an anthropomorphic phantom based on secondary charged particle detection in a silicon pixel telescope designed for proton computed tomography. The predicted Bragg peak positions, along with their uncertainties, are compared to the treatment plan, rejecting spots that are predicted to lie outside the 95% confidence interval. The resulting spot rejection rate serves as a metric for the quality of the treatment fraction. Main results. The introduced spot rejection rate metric is shown to be well-defined for range predictors with well-calibrated uncertainties. Using this method, treatment errors in the form of lateral shifts can be detected down to 1 mm after around 1400 treated spots with spot intensities of 1 × 10⁷ protons. The range verification model used in this metric predicts the Bragg peak depth with a mean absolute error of 1.107 ± 0.015 mm. Significance. Uncertainty-aware machine learning has potential applications in proton therapy quality control. This work presents the foundation for future developments in this area.
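As a hedged illustration of how such a metric could be computed (assuming Gaussian predictive uncertainties and a 1.96σ interval for the 95% confidence level; all names and data below are placeholders, not the study's code):

```python
# Minimal sketch of a spot rejection rate metric; assumes the network outputs
# a Gaussian predictive distribution (mean, sigma) per spot.
import numpy as np

def spot_rejection_rate(planned_depth, pred_mean, pred_sigma, z=1.96):
    """Fraction of spots whose planned Bragg peak depth lies outside the
    95% confidence interval of the predicted depth."""
    rejected = np.abs(planned_depth - pred_mean) > z * pred_sigma
    return rejected.mean()

rng = np.random.default_rng(0)
planned = rng.uniform(50.0, 150.0, 1400)        # planned depths [mm]
sigma = np.full(1400, 1.1)                      # predicted uncertainties [mm]
pred = planned + rng.normal(0.0, 1.1, 1400)     # error-free, well calibrated
print(spot_rejection_rate(planned, pred, sigma))        # ~0.05 baseline
print(spot_rejection_rate(planned, pred + 2.0, sigma))  # systematic range error
```

For a well-calibrated predictor the baseline rejection rate sits near 5% by construction, so a statistically significant excess over that baseline is what signals a treatment error.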
Subject(s)
Proton Therapy, Uncertainty, Protons, Machine Learning, Neural Networks (Computer)
ABSTRACT
BACKGROUND: Proton computed tomography (pCT) and radiography (pRad) are proposed modalities for improved treatment plan accuracy and in situ treatment validation in proton therapy. The pCT system of the Bergen pCT collaboration is able to handle very high particle intensities by means of track reconstruction. However, incorrectly reconstructed and secondary tracks degrade the image quality. We have investigated whether a convolutional neural network (CNN)-based filter is able to improve the image quality. MATERIAL AND METHODS: The CNN was trained by simulating and reconstructing tens of millions of proton and helium tracks. The CNN filter was then compared to simple energy loss threshold methods using the Area Under the Receiver Operating Characteristic curve (AUROC), and by comparing the image quality and Water Equivalent Path Length (WEPL) error of proton and helium radiographs filtered with the same methods. RESULTS: The CNN method led to a considerable improvement of the AUROC, from 74.3% to 97.5% with protons and from 94.2% to 99.5% with helium. The CNN filtering reduced the WEPL error in the helium radiograph from 1.03 mm to 0.93 mm, while no improvement was seen in the CNN-filtered pRads. CONCLUSION: The CNN improved the filtering of proton and helium tracks. Only in the helium radiograph did this lead to improved image quality.
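The AUROC comparison between a filter's continuous scores and the true track labels could be carried out along the following lines (an illustrative sketch using scikit-learn; the labels and score distributions are synthetic stand-ins, not the study's data):

```python
# Illustrative AUROC evaluation of two track filters; data are synthetic.
# Label 1 = correctly reconstructed primary track, 0 = incorrect/secondary.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 10_000)

# Scores from a simple energy-loss threshold (weak separation) and from a
# CNN filter (strong separation); both distributions are invented here.
threshold_score = labels + rng.normal(0.0, 1.5, labels.size)
cnn_score = labels + rng.normal(0.0, 0.4, labels.size)

print("energy-loss threshold AUROC:", roc_auc_score(labels, threshold_score))
print("CNN filter AUROC:", roc_auc_score(labels, cnn_score))
```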
Subject(s)
Telescopes, Humans, Computer-Assisted Image Processing, Monte Carlo Method, Neural Networks (Computer), Imaging Phantoms, Radiography
ABSTRACT
Radiation therapy using protons and heavier ions is a fast-growing therapeutic option for cancer patients. A clinical system for particle imaging in particle therapy would enable online patient position verification, estimation of the dose deposition through range monitoring, and a reduction of uncertainties in the calculation of the relative stopping power of the patient. Several prototype imaging modalities offer radiography and computed tomography using protons and heavy ions. A Digital Tracking Calorimeter (DTC), currently under development, has been proposed as one such detector. In the DTC, 43 longitudinal layers of laterally stacked ALPIDE CMOS monolithic active pixel sensor chips are able to reconstruct a large number of simultaneously recorded proton tracks. In this study, we explored the capability of the DTC for helium imaging, which offers favorable spatial resolution over proton imaging. However, helium ions exhibit a larger cross section for inelastic nuclear interactions, increasing the number of secondaries produced in the imaged object and in the detector itself. To that end, a filtering process able to remove a large fraction of the secondaries was identified, and the track reconstruction process was adapted for helium ions. By filtering on the energy loss along the tracks, on the incoming angle, and on the particle ranges, 97.5% of the secondaries were removed. After passing through 16 cm of water, 50.0% of the primary helium ions survived; after the proposed filtering, 42.4% of the primaries remained; finally, after subsequent image reconstruction, 31% of the primaries remained. Helium track reconstruction leads to more track matching errors compared to protons due to the increased available focus strength of the helium beam. In a head phantom radiograph, the Water Equivalent Path Length error envelope was 1.0 mm for helium and 1.1 mm for protons. This accuracy is expected to be sufficient for helium imaging for pre-treatment verification purposes.
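Schematically, the three-fold filter on energy loss, incoming angle, and particle range might look like the sketch below (all cut values and variable names are placeholders; the study's actual thresholds were tuned to remove 97.5% of the secondaries):

```python
# Schematic secondary-track filter; cut values are placeholders, not the
# study's calibrated thresholds.
import numpy as np

def keep_primary(dedx, angle_deg, range_mm,
                 dedx_max=10.0, angle_max=5.0, range_min=20.0):
    """Boolean mask of tracks consistent with a primary helium ion:
    moderate energy loss, small incoming angle, plausible range."""
    return ((dedx < dedx_max)
            & (np.abs(angle_deg) < angle_max)
            & (range_mm > range_min))

rng = np.random.default_rng(2)
n = 1_000
mask = keep_primary(rng.gamma(2.0, 3.0, n),      # energy loss along the track
                    rng.normal(0.0, 3.0, n),     # incoming angles [deg]
                    rng.uniform(0.0, 160.0, n))  # particle ranges [mm]
print("fraction of tracks kept:", mask.mean())
```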