ABSTRACT
We have previously developed a GPU-based Monte Carlo (MC) dose engine on the OpenCL platform, named goMC, with a built-in analytical linear accelerator (linac) beam model. In this paper, we report our recent improvements to goMC to move it toward clinical use. First, we have adapted a previously developed automatic beam commissioning approach to our beam model. The commissioning was conducted through an optimization process, minimizing the discrepancies between calculated dose and measurement. We successfully commissioned six beam models built for Varian TrueBeam linac photon beams, including four beams of different energies (6 MV, 10 MV, 15 MV, and 18 MV) and two flattening-filter-free (FFF) beams of 6 MV and 10 MV. Second, to facilitate the use of goMC for treatment plan dose calculations, we have developed an efficient source particle sampling strategy. It uses pre-generated fluence maps (FMs) to bias the sampling of the control point for source particles already sampled from our beam model. It can effectively reduce the number of source particles required to reach a given statistical uncertainty level in the calculated dose, as compared to the conventional FM weighting method. For a head-and-neck patient treated with volumetric modulated arc therapy (VMAT), a reduction factor of ~2.8 was achieved, accelerating dose calculation from 150.9 s to 51.5 s. The overall accuracy of goMC was investigated on a VMAT prostate patient case treated with a 10 MV FFF beam. A 3D gamma index test was conducted to evaluate the discrepancy between our calculated dose and the dose calculated in the Varian Eclipse treatment planning system. The passing rate was 99.82% for the 2%/2 mm criterion and 95.71% for the 1%/1 mm criterion. Our studies have demonstrated the effectiveness and feasibility of our auto-commissioning approach and new source sampling strategy for fast and accurate MC dose calculations for treatment plans.
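The biased control-point sampling can be illustrated with a short sketch. This is not the goMC OpenCL code; it is a minimal importance-sampling illustration in Python under our reading of the abstract, with a toy fluence-map array `fms` and pixel indices `(iy, ix)` as assumed inputs. The conventional scheme draws the control point uniformly and carries the fluence value as the particle weight, while the biased scheme draws the control point in proportion to the fluence and carries a constant per-pixel weight, removing the weight variance that would otherwise inflate the statistical uncertainty.

```python
# Hypothetical sketch (not the authors' goMC code): importance sampling of the
# control point for a source particle, biased by pre-generated fluence maps (FMs).
# Assumed inputs: fms has shape (n_cp, ny, nx); (iy, ix) is the FM pixel that the
# already-sampled source particle intersects on the fluence plane.
import numpy as np

rng = np.random.default_rng(0)

def sample_cp_uniform(fms, iy, ix):
    """Conventional FM weighting: uniform control point, weight = FM value."""
    n_cp = fms.shape[0]
    k = rng.integers(n_cp)
    return k, fms[k, iy, ix]

def sample_cp_biased(fms, iy, ix):
    """FM-biased sampling: control point drawn proportional to its FM value;
    the weight becomes the per-pixel mean fluence, so its variance vanishes."""
    col = fms[:, iy, ix]
    total = col.sum()
    if total <= 0.0:               # pixel blocked at every control point
        return None, 0.0
    k = rng.choice(len(col), p=col / total)
    return k, total / len(col)

# Toy check: both estimators share the same expected weight per particle,
# but the biased one has zero weight variance for a fixed pixel.
fms = rng.random((180, 40, 40))    # 180 control points of a VMAT arc (toy numbers)
iy, ix = 20, 20
w_uni = np.array([sample_cp_uniform(fms, iy, ix)[1] for _ in range(20000)])
w_bia = np.array([sample_cp_biased(fms, iy, ix)[1] for _ in range(20000)])
print(w_uni.mean(), w_uni.var())   # mean ~ FM column mean, nonzero variance
print(w_bia.mean(), w_bia.var())   # same mean, (near) zero variance
```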
Subjects
Head and Neck Neoplasms/radiotherapy; Models, Theoretical; Monte Carlo Method; Patient Care Planning; Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/instrumentation; Computer Simulation; Humans; Male; Particle Accelerators/instrumentation; Radiotherapy Dosage; Radiotherapy, Intensity-Modulated/methods
ABSTRACT
BACKGROUND: In regularized iterative reconstruction algorithms, the selection of the regularization parameter depends on the noise level of the cone beam projection data. OBJECTIVE: Our aim is to propose an algorithm to estimate the noise level of cone beam projection data. METHODS: We first derived the data correlation of cone beam projection data in the Fourier domain, based on which the signal and the noise were decoupled. The noise was then extracted and averaged for estimation. An adaptive regularization parameter selection strategy was introduced based on the estimated noise level. Simulation and real data studies were conducted for performance validation. RESULTS: There exists an approximately zero-energy double-wedge area in the 3D Fourier domain of cone beam projection data. As for the noise level estimation results, the averaged relative errors of the proposed algorithm in the analytical/MC/spotlight-mode simulation experiments were 0.8%, 0.14%, and 0.24%, respectively, outperforming the homogeneous-area-based and transformation-based algorithms. Real data studies indicated that the estimated noise levels were inversely proportional to the exposure levels, i.e., the slopes in the log-log plot were -1.0197 and -1.049 for the short-scan and half-fan modes, respectively. The introduced regularization parameter selection strategy delivered promising reconstructed image quality. CONCLUSIONS: Based on the data correlation of cone beam projection data in the Fourier domain, the proposed algorithm can estimate the noise level of cone beam projection data accurately and robustly. The estimated noise level can be used to adaptively select the regularization parameter.
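The decoupling idea can be sketched as follows. This is only a hedged illustration, not the paper's algorithm: it assumes a caller-supplied Boolean mask selecting an approximately signal-free Fourier region (standing in for the double-wedge area) and uses the flat spectrum of white noise together with Parseval's theorem to convert the average power in that region into a noise standard deviation.

```python
# Minimal sketch (our assumptions, not the paper's implementation): estimate the
# noise level of a projection stack from an approximately signal-free region of
# its 3D Fourier transform.
import numpy as np

def noise_std_from_fourier(projections, signal_free_mask):
    """projections: 3D array (n_views, n_rows, n_cols).
    signal_free_mask: boolean array of the same shape selecting Fourier samples
    assumed to contain (almost) no object signal, e.g. a double-wedge region."""
    F = np.fft.fftn(projections)                    # unnormalized ('backward') FFT
    power = np.abs(F[signal_free_mask]) ** 2
    n = projections.size
    # For white noise of variance sigma^2, E|F_k|^2 = n * sigma^2 at every k.
    return np.sqrt(power.mean() / n)

# Toy self-test with pure white noise and an arbitrary high-frequency mask.
rng = np.random.default_rng(1)
proj = rng.normal(0.0, 0.05, size=(90, 64, 64))
freq = np.abs(np.fft.fftfreq(90))[:, None, None]
mask = np.broadcast_to(freq > 0.3, proj.shape)      # stand-in for the wedge region
print(noise_std_from_fourier(proj, mask))           # ~0.05
```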
Subjects
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Humans; Phantoms, Imaging; Scattering, Radiation
ABSTRACT
The aim of this study is to compare the recent Eclipse Acuros XB (AXB) dose calculation engine with the Pinnacle collapsed cone convolution/superposition (CCC) dose calculation algorithm and the Eclipse anisotropic analytic algorithm (AAA) for stereotactic ablative radiotherapy (SAbR) treatment planning of thoracic spinal (T-spine) metastases using IMRT and VMAT delivery techniques. The three commissioned dose engines (CCC, AAA, and AXB) were validated with ion chamber and EBT2 film measurements utilizing a heterogeneous slab-geometry water phantom and an anthropomorphic phantom. Step-and-shoot IMRT and VMAT treatment plans were developed and optimized for eight patients in Pinnacle, following our institutional SAbR protocol for spinal metastases. The CCC algorithm, with heterogeneity corrections, was used for dose calculations. These plans were then exported to Eclipse and recalculated using the AAA and AXB dose calculation algorithms. Various dosimetric parameters calculated with CCC and AAA were compared to those of the AXB calculations. In regions receiving above 50% of the prescription dose, the calculated CCC mean dose is 3.1%-4.1% higher than that of the AXB calculations for IMRT plans and 2.8%-3.5% higher for VMAT plans, while the calculated AAA mean dose is 1.5%-2.4% lower for IMRT and 1.2%-1.6% lower for VMAT. Statistically significant differences (p < 0.05) were observed for most GTV and PTV indices between the CCC and AXB calculations for IMRT and VMAT, while differences between the AAA and AXB calculations were not statistically significant. For T-spine SAbR treatment planning, the CCC calculations give a statistically significant overestimation of target dose compared to AXB. AAA underestimates target dose with no statistical significance compared to AXB. Further study is needed to determine the clinical impact of these findings.
Subjects
Algorithms; Anisotropy; Phantoms, Imaging; Radiosurgery/methods; Spinal Neoplasms/surgery; Thoracic Neoplasms/surgery; Computer Simulation; Humans; Radiometry/methods; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Spinal Neoplasms/secondary; Thoracic Neoplasms/pathology
ABSTRACT
This article examines the critical role of fast Monte Carlo (MC) dose calculations in advancing proton therapy techniques, particularly in the context of increasing treatment customization and precision. As adaptive radiotherapy and other patient-specific approaches evolve, the need for accurate and precise dose calculations, essential for techniques like proton-based stereotactic radiosurgery, becomes more prominent. These calculations, however, are time-intensive, with the treatment planning/optimization process constrained by the achievable speed of dose computations. Thus, enhancing the speed of MC methods is vital, as it not only facilitates the implementation of novel treatment modalities but also leads to more optimal treatment plans. Today, the state-of-the-art in MC dose calculation speeds is 10^6-10^7 protons per second. This review highlights the latest advancements in fast MC dose calculations that have led to such speeds, including emerging artificial intelligence-based techniques, and discusses their application in both current and emerging proton therapy strategies.
Subjects
Monte Carlo Method; Proton Therapy; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted; Proton Therapy/methods; Humans; Radiotherapy Planning, Computer-Assisted/methods; Radiation Dosage; Time Factors
ABSTRACT
In August 2022, the Cancer Informatics for Cancer Centers brought together cancer informatics leaders for its biannual symposium, Precision Medicine Applications in Radiation Oncology, co-chaired by Quynh-Thu Le, MD (Stanford University), and Walter J. Curran, MD (GenesisCare). Over the course of 3 days, presenters discussed a range of topics relevant to radiation oncology and the cancer informatics community more broadly, including biomarker development, decision support algorithms, novel imaging tools, theranostics, and artificial intelligence (AI) for the radiotherapy workflow. Since the symposium, there has been an impressive shift in the promise and potential for integration of AI in clinical care, accelerated in large part by major advances in generative AI. AI is now poised more than ever to revolutionize cancer care. Radiation oncology is a field that uses and generates a large amount of digital data and is therefore likely to be one of the first fields to be transformed by AI. As experts in the collection, management, and analysis of these data, the informatics community will take a leading role in ensuring that radiation oncology is prepared to take full advantage of these technological advances. In this report, we provide highlights from the symposium, which took place in Santa Barbara, California, from August 29 to 31, 2022. We discuss lessons learned from the symposium for data acquisition, management, representation, and sharing, and put these themes into context to prepare radiation oncology for the successful and safe integration of AI and informatics technologies.
Subjects
Neoplasms; Radiation Oncology; Humans; Artificial Intelligence; Informatics; Neoplasms/diagnosis; Neoplasms/radiotherapy
ABSTRACT
PURPOSE: Simulation of x-ray projection images plays an important role in cone beam CT (CBCT) related research projects, such as the design of reconstruction algorithms or scanners. A projection image contains primary signal, scatter signal, and noise. It is computationally demanding to perform accurate and realistic computations for all of these components. In this work, the authors develop a package on a graphics processing unit (GPU), called gDRR, for the accurate and efficient computation of x-ray projection images in CBCT under clinically realistic conditions. METHODS: The primary signal is computed by a trilinear ray-tracing algorithm. A Monte Carlo (MC) simulation is then performed, yielding the primary signal and the scatter signal, both with noise. A denoising process specifically designed for Poisson noise removal is applied to obtain a smooth scatter signal. The noise component is then obtained by combining the difference between the MC primary and the ray-tracing primary signals, and the difference between the MC simulated scatter and the denoised scatter signals. Finally, a calibration step converts the calculated noise signal into a realistic one by scaling its amplitude according to a specified mAs level. The computations of gDRR include a number of realistic features, e.g., a bowtie filter, a polyenergetic spectrum, and detector response. The implementation is fine-tuned for a GPU platform to yield high computational efficiency. RESULTS: For a typical CBCT projection with a polyenergetic spectrum, the calculation time for the primary signal using the ray-tracing algorithm is 1.2-2.3 s, while the MC simulations take 28.1-95.3 s, depending on the voxel size. Computation time for all other steps is negligible. The ray-tracing primary signal matches well with the primary part of the MC simulation result. The MC simulated scatter signal using gDRR is in agreement with EGSnrc results with a relative difference of 3.8%. A noise calibration process is conducted to calibrate gDRR against a real CBCT scanner. The calculated projections are accurate and realistic, such that beam-hardening artifacts and scatter artifacts can be reproduced using the simulated projections. The noise amplitudes in the CBCT images reconstructed from the simulated projections also agree with those in the measured images at corresponding mAs levels. CONCLUSIONS: A GPU computational tool, gDRR, has been developed for the accurate and efficient simulations of x-ray projections of CBCT with realistic configurations.
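The signal composition described in METHODS can be summarized in a small sketch. The function below is not the gDRR implementation; it is a schematic combination of caller-supplied detector arrays, and the 1/sqrt(mAs) scaling of the noise term is an assumed calibration model standing in for the paper's measured calibration.

```python
# Hedged sketch of the signal composition described above (not the actual gDRR
# code). All inputs are 2D detector arrays assumed to be computed elsewhere: a
# ray-tracing primary, an MC primary, an MC scatter, and a denoised MC scatter.
import numpy as np

def compose_projection(primary_rt, primary_mc, scatter_mc, scatter_denoised,
                       mas, mas_ref=1.0):
    """Assemble a realistic projection: smooth primary + smooth scatter + noise.
    The noise is taken as the MC fluctuations around the smooth components and
    rescaled for the requested mAs (quantum noise amplitude ~ 1/sqrt(mAs))."""
    noise = (primary_mc - primary_rt) + (scatter_mc - scatter_denoised)
    noise_scaled = noise * np.sqrt(mas_ref / mas)   # assumed calibration model
    return primary_rt + scatter_denoised + noise_scaled

# Toy usage with flat stand-in signals.
rng = np.random.default_rng(7)
p_rt = np.full((4, 4), 100.0); p_mc = p_rt + rng.normal(0, 1, (4, 4))
s_dn = np.full((4, 4), 20.0);  s_mc = s_dn + rng.normal(0, 2, (4, 4))
print(compose_projection(p_rt, p_mc, s_mc, s_dn, mas=0.5).shape)
```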
Subjects
Algorithms; Computer Graphics/instrumentation; Cone-Beam Computed Tomography/instrumentation; Cone-Beam Computed Tomography/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted/instrumentation; Software; Computer Simulation; Equipment Design; Light; Models, Biological; Radiographic Image Enhancement/methods; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity
ABSTRACT
PURPOSE: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or some external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a large amount of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction in the situation of an inadequate number of projection images. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. METHODS: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors 4D-CBCT images in which any anatomical feature at one spatial point in one phase can be found at a nearby spatial point in neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The GPU implementation of the algorithms is designed to avoid redundant and uncoalesced memory accesses in order to ensure high computational efficiency. Our algorithms have been tested on a digital NURBS-based cardiac-torso phantom and a clinical patient case. RESULTS: The reconstruction algorithm and the enhancement algorithm generate visually similar 4D-CBCT images, both better than the FDK results. Quantitative evaluations indicate that, compared with the FDK results, our reconstruction method improves the contrast-to-noise ratio (CNR) by a factor of 2.56-3.13 and our enhancement method increases the CNR by 2.75-3.33 times. The enhancement method also removes over 80% of the streak artifacts from the FDK results. The total computation time is 509-683 s for the reconstruction algorithm and 524-540 s for the enhancement algorithm on an NVIDIA Tesla C1060 GPU card. CONCLUSIONS: By innovatively taking the temporal redundancy among 4D-CBCT images into consideration, the proposed algorithms can produce high quality 4D-CBCT images with far fewer streak artifacts than the FDK results in the situation of an inadequate number of projections.
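A hedged sketch of a temporal nonlocal means term may help make the energy concrete. The code below reflects our reading of a TNLM penalty between two neighboring-phase 2D images (patch-similarity weights times squared intensity differences over a small spatial search window); it is not the authors' GPU implementation, and the window sizes and decay parameter `h` are illustrative choices.

```python
# Illustrative TNLM energy between two adjacent-phase 2D images (our reading,
# not the authors' code): voxels in one phase are compared to nearby voxels in
# the neighboring phase, weighted by patch similarity, so matching anatomy at
# slightly shifted positions contributes little to the energy.
import numpy as np

def tnlm_energy(f, g, search=2, patch=1, h=0.1):
    """f, g: 2D images of two neighboring phases; search: half-width of the
    spatial search window; patch: half-width of the similarity patch; h: decay."""
    pad = search + patch
    fp = np.pad(f, pad, mode="edge")
    gp = np.pad(g, pad, mode="edge")
    ny, nx = f.shape
    energy = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # patch-distance map between f and g shifted by (dy, dx)
            d2 = np.zeros((ny, nx))
            for py in range(-patch, patch + 1):
                for px in range(-patch, patch + 1):
                    a = fp[pad+py:pad+py+ny, pad+px:pad+px+nx]
                    b = gp[pad+dy+py:pad+dy+py+ny, pad+dx+px:pad+dx+px+nx]
                    d2 += (a - b) ** 2
            w = np.exp(-d2 / (h ** 2))              # similarity weight
            shifted_g = gp[pad+dy:pad+dy+ny, pad+dx:pad+dx+nx]
            energy += np.sum(w * (f - shifted_g) ** 2)
    return energy

# Toy usage on two slightly shifted images.
rng = np.random.default_rng(9)
f = rng.random((32, 32)); g = np.roll(f, 1, axis=0)
print(round(tnlm_energy(f, g), 3))
```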
Subjects
Cone-Beam Computed Tomography/methods; Four-Dimensional Computed Tomography/methods; Algorithms; Artifacts; Humans; Phantoms, Imaging; Time Factors
ABSTRACT
PURPOSE: Understanding liver motion characteristics, such as interfractional and intrafractional motion variability, differences in motion at different locations in the organ, and their complex relationship with the breathing cycle, is particularly important for image-guided liver SBRT. The purpose of this study was to investigate such motion characteristics based on fiducial markers tracked in the x-ray projections of the CBCT scans taken immediately prior to the treatments. METHODS: Twenty liver SBRT patients were analyzed. Each patient had three fiducial markers (2 × 5-mm gold) percutaneously implanted around the gross tumor. The prescription ranged from 2 to 8 fractions per patient. The CBCT projection data for each fraction (~650 projections/scan), for each patient, were analyzed and the 2D positions of the markers were extracted using an in-house algorithm. In total, >55 000 x-ray projections were analyzed from 85 CBCT scans. From the 2D extracted positions, a 3D motion trajectory of the markers was constructed from each CBCT scan, resulting in left-right (LR), anterior-posterior (AP), and cranio-caudal (CC) location information of the markers with >55 000 data points. The authors then analyzed the interfraction and intrafraction liver motion variability, within different locations in the organ, and as a function of the breathing cycle. The authors also compared the motion characteristics against the planning 4DCT and the RPM™ (Varian Medical Systems, Palo Alto, CA) breathing traces. Variations in the appropriate gating window (defined as the percent of the maximum range at which 50% of the marker positions are contained) between fractions were calculated as well. RESULTS: The ranges of motion for the 20 patients were 3.0 ± 2.0 mm, 5.1 ± 3.1 mm, and 17.9 ± 5.1 mm in the planning 4DCT, and 2.8 ± 1.6 mm, 5.3 ± 3.1 mm, and 16.5 ± 5.7 mm in the treatment CBCT, for the LR, AP, and CC directions, respectively. The respiratory period was 3.9 ± 0.7 s and 4.2 ± 0.8 s during the 4DCT simulation and the CBCT scans, respectively. The authors found that breathing-induced AP and CC motions are highly correlated. That is, all markers that moved cranially also moved posteriorly, and vice versa, irrespective of the location. The LR motion had a more variable relationship with the AP/CC motions, and appeared random with respect to the location. That is, when the markers moved in the cranial-posterior direction, 58% of the markers moved to the patient-right, 22% of the markers moved to the patient-left, and 20% of the markers had minimal or no motion. The absolute difference in the motion magnitude between the markers, in different locations within the liver, had a positive correlation with the absolute distance between the markers (R² = 0.69, linear fit). The interfractional gating window varied significantly for some patients, with the largest varying from 29.4% to 56.4% between fractions. CONCLUSIONS: This study analyzed the liver motion characteristics of 20 patients undergoing SBRT. A large variation in motion was observed, both interfractionally and intrafractionally, and as the distance between the markers increased, the difference in the absolute range of motion also increased. This suggests that marker(s) in closest proximity to the target be used.
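The gating-window metric can be illustrated with a short sketch based on our interpretation of the definition quoted above: the smallest window, expressed as a percentage of the full motion range and anchored at the end-exhale extreme, that contains 50% of the tracked marker positions. The anchoring choice and the synthetic trace are assumptions for illustration only.

```python
# Hedged sketch of the gating-window metric (our interpretation, not the
# authors' analysis code).
import numpy as np

def gating_window_percent(cc_positions, coverage=0.5, exhale_is_max=True):
    x = np.asarray(cc_positions, dtype=float)
    full_range = x.max() - x.min()
    anchor = x.max() if exhale_is_max else x.min()
    dist = np.abs(x - anchor)                 # distance from the exhale extreme
    w = np.quantile(dist, coverage)           # window width covering 50% of samples
    return 100.0 * w / full_range

# Toy usage on a synthetic sinusoidal trace (4 s period, 16.5 mm CC amplitude).
t = np.linspace(0, 60, 6000)
cc = -16.5 / 2 * np.cos(2 * np.pi * t / 4.0)  # dwells near the motion extremes
print(round(gating_window_percent(cc), 1))    # percent of the full range
```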
Subjects
Cone-Beam Computed Tomography/methods; Liver/physiopathology; Movement; Radiosurgery/methods; Surgery, Computer-Assisted/methods; Algorithms; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/physiopathology; Carcinoma, Hepatocellular/surgery; Cone-Beam Computed Tomography/standards; Fiducial Markers; Humans; Liver/diagnostic imaging; Liver/surgery; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/physiopathology; Liver Neoplasms/surgery; Radiosurgery/standards; Surgery, Computer-Assisted/standards
ABSTRACT
Total body irradiation is an important part of the conditioning regimens frequently used to prepare patients for allogeneic hematopoietic stem cell transplantation (SCT). Volumetric-modulated arc therapy enabled total body irradiation (VMAT-TBI), an alternative to conventional TBI (cTBI), is a novel radiotherapy treatment technique that has been implemented and investigated in our institution. The purpose of this study is to (1) report our six-year clinical experience in terms of treatment planning strategy and delivery time and (2) evaluate the clinical outcomes and toxicities in our cohort of patients treated with VMAT-TBI. This is a retrospective single center study. Forty-four patients at our institution received VMAT-TBI and chemotherapy conditioning followed by allogeneic SCT between 2014 and 2020. Thirty-two patients (73%) received standard-dose TBI (12-13.2 Gy in 6-8 fractions twice daily), whereas 12 (27%) received low-dose TBI (2-4 Gy in one fraction). Treatment planning, delivery, and treatment outcome data including overall survival (OS), relapse-free survival (RFS), and toxicities were analyzed. The developed VMAT-TBI planning strategy consistently generated plans satisfying our dose constraints, with planning target volume coverage >90%, mean lung dose ~50% to 75% of prescription dose, and minimal hotspots in critical organs. Most of the treatment deliveries were <100 minutes (range 33-147, mean 72). The median follow-up was 26 months. At the last follow-up, 34 of 44 (77%) patients were alive, with 1- and 2-year OS of 90% and 79% and RFS of 88% and 71%, respectively. The most common grade 3+ toxicities observed were mucositis (31 patients [71%]) and nephrotoxicity (6 patients [13%]), both of which were deemed multifactorial in cause. Four patients (9%) in the standard-dose cohort developed grade 3+ pneumonitis, with 3 cases in the setting of documented respiratory infection and only 1 (2%) deemed likely related to radiation alone. VMAT-TBI provides a safe alternative to cTBI. The dose modulation capability of VMAT-TBI may lead to new treatment strategies, such as simultaneous boost and further critical organ sparing, for better malignant cell eradication, immune suppression, and lower toxicities.
Subjects
Radiotherapy, Intensity-Modulated; Humans; Organs at Risk/radiation effects; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/adverse effects; Retrospective Studies; Treatment Outcome; Whole-Body Irradiation
ABSTRACT
PURPOSE: Four-dimensional computed tomography (4DCT) has been widely used in cancer radiotherapy for accurate target delineation and motion measurement for tumors in the thorax and upper abdomen areas. However, its prolonged scanning duration causes a considerable increase in radiation dose compared to conventional CT, which is a major concern in its clinical application. The aim of this work is to develop a new algorithm to reconstruct 4DCT images from undersampled projections acquired at low mA s levels in order to reduce the imaging dose. METHODS: Conventionally, each phase of 4DCT is reconstructed independently using the filtered backprojection (FBP) algorithm. The basic idea of the authors' new algorithm is that by utilizing the common information among different phases, the input information required to reconstruct an image of high quality, and thus the imaging dose, can be reduced. The authors proposed a temporal nonlocal means (TNLM) method to explore the interphase similarity. All phases of the 4DCT images are reconstructed simultaneously by minimizing a cost function consisting of a data fidelity term and a TNLM regularization term. The authors utilized a modified forward-backward splitting algorithm and a Gauss-Jacobi iteration method to efficiently solve the minimization problem. The algorithm was also implemented on a graphics processing unit (GPU) to improve the computational speed. The authors' reconstruction algorithm has been tested on a digital NCAT thorax phantom in three low dose scenarios: all projections with low mA s level, undersampled projections with high mA s level, and undersampled projections with low mA s level. RESULTS: In all three low dose scenarios, the new algorithm generates visually much better CT images containing less image noise and fewer streaking artifacts than the standard FBP algorithm. Quantitative analysis shows that by comparing the authors' TNLM algorithm to the standard FBP algorithm, the contrast-to-noise ratio has been improved by a factor of 3.9-10.2 and the signal-to-noise ratio has been improved by a factor of 2.1-5.9, depending on the cases. In the situation of undersampled projection data, the majority of the streaks in the images reconstructed by FBP can be suppressed using the authors' algorithm. The total reconstruction time for all ten phases of a slice ranges from 40 to 90 s on an NVIDIA Tesla C1060 GPU card. CONCLUSIONS: The experimental results indicate that the authors' new algorithm outperforms the conventional FBP algorithm in effectively reducing the image artifacts due to undersampling and suppressing the image noise due to the low mA s level.
Subjects
Four-Dimensional Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Radiation Dosage; Algorithms; Phantoms, Imaging; Radiography, Thoracic; Time Factors
ABSTRACT
PURPOSE: To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. METHODS: Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. RESULTS: For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm, which does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. CONCLUSIONS: Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on the GPU, requiring 0.1-0.3 s for each x-ray projection.
Subjects
Algorithms; Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Radiographic Image Interpretation, Computer-Assisted/methods; Radiotherapy, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Computer Systems; Humans; Phantoms, Imaging; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed/instrumentation
ABSTRACT
X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical concern in most image guided radiation therapy procedures. The goal of this paper is to develop a fast GPU-based algorithm to reconstruct high quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. We develop a GPU-friendly version of a forward-backward splitting algorithm to solve this problem. A multi-grid technique is also employed. We test our CBCT reconstruction algorithm on a digital phantom and a head-and-neck patient case. The performance under low mAs is also validated using physical phantoms. It is found that 40 x-ray projections are sufficient to reconstruct CBCT images with satisfactory quality for clinical purposes. Phantom experiments indicate that CBCT images can be successfully reconstructed under 0.1 mAs/projection. Compared with the widely used head-and-neck scanning protocol of about 360 projections at 0.4 mAs/projection, an overall 36 times dose reduction has been achieved. The reconstruction time is about 130 s on an NVIDIA Tesla C1060 GPU card, which is estimated to be ~100 times faster than similar regularized iterative reconstruction approaches.
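A minimal sketch of the TV-regularized forward-backward splitting iteration is given below. It is not the GPU or multi-grid implementation from the paper: the projector is replaced by a simple subsampling mask, and the proximal step for total variation is approximated by a few gradient-descent steps on a smoothed TV term.

```python
# Illustrative forward-backward splitting sketch for TV-regularized reconstruction,
# min_x 0.5*||A x - b||^2 + lam*TV(x). The projector A/At and the TV proximal step
# are simplified stand-ins, not the paper's implementation.
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed (isotropic) total variation of a 2D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def fbs_reconstruct(A, At, b, x0, lam=0.05, tau=0.5, iters=100, inner=5):
    x = x0.copy()
    for _ in range(iters):
        x = x - tau * At(A(x) - b)          # forward (gradient) step on fidelity
        for _ in range(inner):              # backward step: approximate TV prox
            x = x - tau * lam * tv_grad(x)
    return x

# Toy usage: A = subsampling mask (stand-in for the undersampled projector).
rng = np.random.default_rng(2)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
mask = rng.random((64, 64)) < 0.4
A = lambda x: x * mask
At = A                                        # the mask operator is self-adjoint
b = A(truth) + 0.01 * rng.normal(size=truth.shape)
rec = fbs_reconstruct(A, At, b, np.zeros_like(truth))
print(float(np.abs(rec - truth).mean()))
```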
Subjects
Algorithms; Computer Graphics; Cone-Beam Computed Tomography/methods; Radiotherapy Planning, Computer-Assisted/methods; Artifacts; Computer Simulation; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Humans; Models, Statistical; Phantoms, Imaging; Radiation Dosage; Radiographic Image Interpretation, Computer-Assisted/methods; Radiotherapy Dosage
ABSTRACT
Automatic sigmoid colon segmentation in CT for radiotherapy treatment planning is challenging due to complex organ shape, close distances to other organs, and large variations in size, shape, and filling status. The patient bowel is often not evacuated, and CT contrast enhancement is not used, which further increases the problem difficulty. Deep learning (DL) has demonstrated its power in many segmentation problems. However, standard 2-D approaches cannot handle the sigmoid segmentation problem due to incomplete geometry information, and 3-D approaches often encounter the challenge of a limited training data size. Motivated by the way a human segments the sigmoid slice by slice while considering connectivity between adjacent slices, we proposed an iterative 2.5-D DL approach to solve this problem. We constructed a network that took as input an axial CT slice, the sigmoid mask on this slice, and an adjacent CT slice to be segmented, and output the predicted mask on the adjacent slice. We also considered other organ masks as prior information. We trained the iterative network with 50 patient cases using five-fold cross validation. The trained network was repeatedly applied to generate masks slice by slice. The method achieved average Dice similarity coefficients of 0.82 ± 0.06 and 0.88 ± 0.02 in 10 test cases without and with prior information, respectively.
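The slice-by-slice propagation can be sketched as a simple control loop. The network call below (`predict_next_mask`) is a placeholder for the trained 2.5-D model, and prior organ masks are omitted; only the iterative walk away from a seeded slice, in both directions, is illustrated.

```python
# Schematic sketch of the iterative 2.5-D propagation described above (control
# flow only, not the authors' trained model).
import numpy as np

def predict_next_mask(cur_slice, cur_mask, next_slice):
    """Placeholder for the trained 2.5-D network: here we simply carry the mask
    forward, which is enough to exercise the propagation loop."""
    return cur_mask.copy()

def propagate_segmentation(ct_volume, seed_index, seed_mask):
    """ct_volume: (n_slices, ny, nx); seed_mask: 2D mask on slice seed_index."""
    n = ct_volume.shape[0]
    masks = np.zeros_like(ct_volume, dtype=bool)
    masks[seed_index] = seed_mask
    for direction in (+1, -1):                      # walk superiorly, then inferiorly
        k = seed_index
        while 0 <= k + direction < n:
            nxt = k + direction
            masks[nxt] = predict_next_mask(ct_volume[k], masks[k], ct_volume[nxt])
            if not masks[nxt].any():                # stop when the organ ends
                break
            k = nxt
    return masks

# Toy usage with a synthetic volume and a circular seed mask.
vol = np.zeros((40, 64, 64), dtype=np.float32)
yy, xx = np.mgrid[:64, :64]
seed = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
print(propagate_segmentation(vol, 20, seed).sum(axis=(1, 2))[:5])
```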
Subjects
Deep Learning; Colon, Sigmoid/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Tomography, X-Ray Computed
ABSTRACT
PURPOSE: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. METHODS: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles, and for each beam angle only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. RESULTS: The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. CONCLUSIONS: The authors have developed an aperture-based VMAT optimization algorithm that can generate clinically deliverable, high quality treatment plans with very high efficiency.
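The alternation between the pricing subproblem and the master problem can be sketched structurally. The code below is not the authors' MATLAB/CUDA implementation: the pricing step (`price_aperture`) returns a toy dose column instead of solving the MLC-constrained aperture problem, and the master problem is a simple projected-gradient nonnegative least-squares fit of the dose rates.

```python
# Structural sketch of the column-generation loop described above (placeholders
# for pricing and dose calculation; only the alternation is illustrated).
import numpy as np

rng = np.random.default_rng(3)
n_angles, n_voxels = 180, 500
d_target = np.ones(n_voxels)                      # toy prescription

def price_aperture(angle, residual):
    """Placeholder pricing subproblem: return a toy 'dose per unit MU' column.
    A real implementation selects MLC leaf positions minimizing the reduced
    cost subject to leaf-motion constraints."""
    col = np.clip(rng.normal(0.02, 0.01, n_voxels), 0, None)
    score = -col @ residual                       # reduced-cost surrogate
    return score, col

def solve_master(columns, iters=200, step=1e-2):
    """Nonnegative least squares by projected gradient: fit dose rates (weights)."""
    A = np.stack(columns, axis=1)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = np.maximum(0.0, w - step * A.T @ (A @ w - d_target))
    return w

columns, weights, unoccupied = [], None, set(range(n_angles))
for _ in range(5):                                # a few outer iterations for brevity
    residual = d_target - (np.stack(columns, 1) @ weights if columns else 0.0)
    scored = [(price_aperture(a, residual), a) for a in unoccupied]
    (best_score, best_col), best_angle = min(scored, key=lambda s: s[0][0])
    columns.append(best_col); unoccupied.remove(best_angle)
    weights = solve_master(columns)
print(len(columns), float(np.linalg.norm(np.stack(columns, 1) @ weights - d_target)))
```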
Subjects
Head and Neck Neoplasms/radiotherapy; Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Algorithms; Computer Graphics; Computer Simulation; Computers; Dose-Response Relationship, Radiation; Humans; Male; Models, Statistical; Radiotherapy Dosage; Reproducibility of Results; Time Factors
ABSTRACT
PURPOSE: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. METHODS: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. RESULTS: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables CBCT images to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections at 0.4 mA s/projection, an overall 36-72 times dose reduction is estimated for our fast CBCT reconstruction algorithm. CONCLUSIONS: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
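The quoted 36-72 times dose reduction follows directly from the protocol numbers in the abstract, taking total mAs (number of projections times mAs per projection) as a proxy for imaging dose:

```python
# Back-of-the-envelope check of the quoted 36-72x dose reduction, using total
# mAs (projections x mAs/projection) as a proxy for imaging dose.
full_fan = 360 * 0.4                  # conventional protocol: 144 total mAs
low_40 = 40 * 0.1                     # 40 projections at 0.1 mAs: 4 total mAs
low_20 = 20 * 0.1                     # 20 projections at 0.1 mAs: 2 total mAs
print(full_fan / low_40, full_fan / low_20)   # -> 36.0, 72.0
```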
Subjects
Cone-Beam Computed Tomography/methods; Radiotherapy/methods; Algorithms; Artifacts; Child; Computer Graphics; Computer Simulation; Humans; Image Processing, Computer-Assisted/methods; Models, Statistical; Phantoms, Imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Radiotherapy Planning, Computer-Assisted/methods; X-Rays
ABSTRACT
PURPOSE: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. METHODS: Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. RESULTS: The average relative image intensity error of the reconstructed volumetric images is 6.9% ± 2.4%. The average 3D tumor localization error is 0.8 ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). CONCLUSIONS: The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
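The PCA parameterization of the deformation vector fields can be sketched compactly. The snippet below is an illustration under our reading of the abstract, not the authors' GPU code: it fits a mean and a few principal components to flattened training DVFs and generates a new DVF from a small coefficient vector; the subsequent warping, projection, and coefficient optimization steps are omitted.

```python
# Minimal sketch of the PCA parameterization of deformation vector fields (DVFs);
# the paper then optimizes the coefficients so that the projection of the warped
# reference image matches the measured projection.
import numpy as np

def fit_dvf_pca(dvfs, n_components=3):
    """dvfs: array (n_phases-1, nz, ny, nx, 3). Returns mean and top eigenvectors."""
    X = dvfs.reshape(dvfs.shape[0], -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def dvf_from_coeffs(mean, components, coeffs, shape):
    """Generate a new DVF: mean + sum_i coeffs[i] * component_i."""
    return (mean + coeffs @ components).reshape(shape)

# Toy usage with random 'training' DVFs.
rng = np.random.default_rng(4)
train = rng.normal(size=(9, 8, 16, 16, 3)).astype(np.float32)
mean, comps = fit_dvf_pca(train, n_components=3)
new_dvf = dvf_from_coeffs(mean, comps, np.array([1.0, -0.5, 0.2]), train.shape[1:])
print(new_dvf.shape)
```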
Subjects
Algorithms; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Radiographic Image Interpretation, Computer-Assisted/methods; Radiotherapy, Computer-Assisted/methods; Computer Systems; Humans; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
PURPOSE: Four-dimensional computed tomography (4DCT) has enhanced imaging of the thorax and upper abdomen during respiration, but intraphase residual motion artifacts persist in cine-mode scanning. In this study, the source and magnitude of projection artifacts due to intraphase target motion are investigated. METHODS: A theoretical model of geometric uncertainty due to partial projection artifacts in cine-mode 4DCT was derived based on ideal periodic motion. Predicted artifacts were compared to measured errors with a rigid lung phantom attached to a programmable motion platform. Ideal periodic motion and actual patient breathing patterns were used as input for phantom motion. Reconstructed target dimensions were measured along the direction of motion and compared to the actual, known dimensions. RESULTS: Artifacts due to intraphase residual motion in cine-mode 4DCT range from a few mm up to a few cm on a given scanner, and can be predicted based on target motion and CT gantry rotation time. Errors in ITV and GTV dimensions were accurately characterized by the theoretical uncertainty at all phases when sinusoidal motion was considered, and in 96% of 300 measurements when patient breathing patterns were used as motion input. When peak-to-peak motion of 1.5 cm is combined with a breathing period of 4 s and gantry rotation time of 1 s, errors due to partial projection artifacts can be greater than 1 cm near midventilation and are a few mm in the inhale and exhale phases. Incorporation of such uncertainty into margin design should be considered in addition to other uncertainties. CONCLUSIONS: Artifacts due to intraphase residual motion exist in 4DCT, even for ideal breathing motions (e.g., sine waves). It was determined that these motion artifacts depend on patient-specific tumor motion and CT gantry rotation speed. Thus, if the patient-specific motion parameters are known (i.e., amplitude and period), a patient-specific margin can and should be designed to compensate for this uncertainty.
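The quoted example (1.5 cm peak-to-peak motion, 4 s period, 1 s gantry rotation) can be checked numerically with a simplified model that is our assumption, not the paper's derivation: the residual blur at a phase is taken as the displacement spanned by an ideal sinusoidal target during one gantry rotation centered on that phase.

```python
# Hedged numerical check of the example quoted above (simplified model, not the
# paper's theoretical derivation).
import numpy as np

def intraphase_blur(peak_to_peak_cm, period_s, rotation_s, phase_time_s, n=1000):
    t = phase_time_s + np.linspace(-rotation_s / 2, rotation_s / 2, n)
    x = (peak_to_peak_cm / 2) * np.cos(2 * np.pi * t / period_s)
    return x.max() - x.min()

A, T, Trot = 1.5, 4.0, 1.0                          # cm, s, s (values from the abstract)
print(round(intraphase_blur(A, T, Trot, 0.0), 2))   # exhale extreme: ~0.22 cm
print(round(intraphase_blur(A, T, Trot, T / 4), 2)) # mid-ventilation: ~1.06 cm
```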
Subjects
Algorithms; Artifacts; Imaging, Three-Dimensional/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Respiratory-Gated Imaging Techniques/methods; Tomography, X-Ray Computed/methods; Humans; Image Enhancement/methods; Motion; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
As one of the most popular approaches in artificial intelligence, deep learning (DL) has attracted considerable attention in the medical physics field over the past few years. The goals of this topical review article are twofold. First, we will provide an overview of the method to medical physics researchers interested in DL to help them start the endeavor. Second, we will give an in-depth discussion of DL technology to make researchers aware of its potential challenges and possible solutions. As such, we divide the article into two major parts. The first part introduces general concepts and principles of DL and summarizes major research resources, such as computational tools and databases. The second part discusses challenges faced by DL, presents available methods to mitigate some of these challenges, and offers our recommendations.
Subjects
Deep Learning; Physics; Diagnostic Imaging; Humans
ABSTRACT
Robustness is an important aspect when evaluating a method of medical image analysis. In this study, we investigated the robustness of a deep learning (DL)-based lung-nodule classification model for CT images with respect to noise perturbations. A deep neural network (DNN) was established to classify 3D CT images of lung nodules into malignant or benign groups. The established DNN was able to predict the malignancy of lung nodules based on CT images, achieving an area under the curve of 0.91 for the testing dataset in a tenfold cross validation, as compared to radiologists' predictions. We then evaluated its robustness against noise perturbations. We added to the input CT images noise signals generated randomly or via an optimization scheme using a realistic noise model based on a noise power spectrum for a given mAs level, and monitored the DNN's output. The results showed that the CT noise was able to affect the prediction results of the established DNN model. With random noise perturbations at 100 mAs, the DNN's predictions for 11.2% of training data and 17.4% of testing data were successfully altered at least once. The percentages increased to 23.4% and 34.3%, respectively, for optimization-based perturbations. We further evaluated the robustness of models with different architectures, parameters, numbers of output labels, etc., and robustness concerns were found in these models to different degrees. To improve model robustness, we empirically proposed an adaptive training scheme. It fine-tuned the DNN model by including in the training dataset perturbations that successfully altered the DNN's predictions. The adaptive scheme was repeatedly performed to gradually improve the DNN's robustness. The fractions of perturbations at 100 mAs affecting the DNN's predictions were reduced to 10.8% for training and 21.1% for testing data by the adaptive training scheme after two iterations. Our study illustrated that robustness may potentially be a concern for an exemplary DL-based lung-nodule classification model for CT images, indicating the need to evaluate and ensure model robustness when developing similar models. The proposed adaptive training scheme may be able to improve model robustness.
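The random-perturbation test can be sketched schematically. In the code below, `model` is a placeholder for the trained DNN, and plain white noise scaled as 1/sqrt(mAs) stands in for the paper's noise-power-spectrum-based noise generator; only the logic of repeatedly perturbing an input and checking for a label flip is illustrated.

```python
# Schematic robustness check: does random noise at a given mAs flip the label?
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal(24 * 24 * 24)             # fixed weights for the placeholder

def model(volume):
    """Placeholder classifier standing in for the trained DNN: a fixed linear
    score squashed to a probability."""
    return 1.0 / (1.0 + np.exp(-(volume.ravel() @ W) / 50.0))

def flipped_by_random_noise(volume, mas, n_trials=20, sigma_ref=0.02):
    """Return True if any of n_trials random noise realizations (amplitude scaled
    as 1/sqrt(mAs)) changes the predicted label at the 0.5 threshold."""
    base_label = model(volume) >= 0.5
    sigma = sigma_ref / np.sqrt(mas / 100.0)      # assumed 100 mAs reference level
    for _ in range(n_trials):
        noisy = volume + rng.normal(0.0, sigma, volume.shape)
        if (model(noisy) >= 0.5) != base_label:
            return True
    return False

# Toy usage on a synthetic nodule patch.
patch = rng.normal(0.0, 0.05, (24, 24, 24))
print(flipped_by_random_noise(patch, mas=100))
```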
Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Tomography, X-Ray Computed; Humans; Lung Neoplasms/pathology
ABSTRACT
PURPOSE: In the treatment planning process of intensity-modulated radiation therapy (IMRT), a human planner operates the treatment planning system (TPS) to adjust treatment planning parameters, for example, the locations and weights of dose-volume histogram (DVH) constraints, to achieve a satisfactory plan for each patient. This process is usually time-consuming, and the plan quality depends on the planner's experience and available planning time. In this study, we proposed to model the behaviors of human planners in treatment planning with a deep reinforcement learning (DRL)-based virtual treatment planner network (VTPN), such that it can operate the TPS in a human-like manner for treatment planning. METHODS AND MATERIALS: Using prostate cancer IMRT as an example, we established the VTPN using a deep neural network that we developed. We considered an in-house optimization engine with a weighted quadratic objective function. The VTPN was designed to observe the DVHs of an intermediate plan and decide on an action to improve the plan by changing weights and threshold doses in the objective function. We trained the VTPN in an end-to-end DRL process in 10 patient cases. A plan score was used to measure plan quality. We demonstrated the feasibility and effectiveness of the trained VTPN in another 64 patient cases. RESULTS: The VTPN was trained to spontaneously learn how to adjust treatment planning parameters to generate high-quality treatment plans. In the 64 testing cases, the quality score with the initial parameters was 4.97 (±2.02), with 9.0 being the highest possible score. Using the VTPN to perform treatment planning improved the quality score to 8.44 (±0.48). CONCLUSIONS: To our knowledge, this is the first time that the intelligent treatment planning behaviors of a human planner in external beam IMRT have been autonomously encoded in an artificial intelligence system. The trained VTPN is capable of behaving in a human-like way to produce high-quality plans.
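The plan-tuning loop can be laid out structurally. Everything below is a toy stand-in, not the trained VTPN, the in-house optimizer, or the actual plan score: it only shows the cycle of observing intermediate-plan features, applying a parameter adjustment, re-optimizing, and using the change in plan score as the reward signal.

```python
# Structural sketch of the plan-tuning cycle described above (toy placeholders).
import numpy as np

rng = np.random.default_rng(6)

def optimize_plan(params):
    """Placeholder for the weighted-quadratic optimization engine: returns fake
    DVH features that depend smoothly on the planning parameters."""
    return np.tanh(params)                         # stand-in "DVH feature" vector

def plan_score(dvh_features):
    """Placeholder plan-quality score (higher is better, max 9 in the paper)."""
    return 9.0 - np.sum((dvh_features - 0.5) ** 2)

def policy(dvh_features):
    """Placeholder for the VTPN: choose which parameter to change and by how much."""
    idx = rng.integers(dvh_features.size)
    return idx, rng.choice([-0.2, 0.2])

params = np.zeros(5)                               # weights / threshold doses
score = plan_score(optimize_plan(params))
for step in range(20):                             # planning episode
    idx, delta = policy(optimize_plan(params))
    trial = params.copy(); trial[idx] += delta     # apply the chosen adjustment
    new_score = plan_score(optimize_plan(trial))
    reward = new_score - score                     # reward used to train the VTPN
    if reward > 0:                                 # greedy stand-in for learning
        params, score = trial, new_score
print(round(score, 3))
```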