Results 1 - 20 of 24
1.
Comput Methods Programs Biomed; 234: 107500, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37030136

ABSTRACT

BACKGROUND AND OBJECTIVES: This study provides a quantitative comparison of images created using gVirtualXray (gVXR) to both Monte Carlo (MC) and real images of clinically realistic phantoms. gVirtualXray is an open-source framework that relies on the Beer-Lambert law to simulate X-ray images in real time on a graphics processing unit (GPU) using triangular meshes. METHODS: Images are generated with gVirtualXray and compared with corresponding ground truth images of an anthropomorphic phantom: (i) an X-ray projection generated using a Monte Carlo simulation code, (ii) real digitally reconstructed radiographs (DRRs), (iii) computed tomography (CT) slices, and (iv) a real radiograph acquired with a clinical X-ray imaging system. When real images are involved, the simulations are used in an image registration framework so that the two images are aligned. RESULTS: The mean absolute percentage error (MAPE) between the images simulated with gVirtualXray and MC is 3.12%, the zero-mean normalised cross-correlation (ZNCC) is 99.96% and the structural similarity index (SSIM) is 0.99. The run-time is 10 days for MC and 23 ms with gVirtualXray. Images simulated using surface models segmented from a CT scan of the Lungman chest phantom were similar to (i) DRRs computed from the CT volume and (ii) an actual digital radiograph. CT slices reconstructed from images simulated with gVirtualXray were comparable to the corresponding slices of the original CT volume. CONCLUSIONS: When scattering can be ignored, accurate images that would take days to compute with MC can be generated in milliseconds with gVirtualXray. This speed of execution enables repetitive simulations with varying parameters, e.g. to generate training data for a deep-learning algorithm or to minimise the objective function of an optimisation problem in image registration. The use of surface models enables the combination of X-ray simulation with real-time soft-tissue deformation and character animation, which can be deployed in virtual reality applications.
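
The core of such a simulator is a per-pixel Beer-Lambert ray sum, and the comparison relies on standard image metrics. Below is a minimal Python sketch of both, with illustrative attenuation coefficients and toy images; it is not gVirtualXray's API, and the SSIM (which needs windowed statistics) is omitted.

```python
import numpy as np

def transmitted_intensity(i0, mu, path_lengths):
    """Beer-Lambert ray sum: I = I0 * exp(-sum_i mu_i * d_i)."""
    return i0 * np.exp(-np.sum(np.asarray(mu) * np.asarray(path_lengths)))

def mape(ref, img):
    """Mean absolute percentage error between two images."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    return 100.0 * np.mean(np.abs(ref - img) / np.abs(ref))

def zncc(a, b):
    """Zero-mean normalised cross-correlation between two images."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())

# A ray crossing 3 cm of "soft tissue" and 1 cm of "bone" (toy mu in 1/cm):
print(transmitted_intensity(1.0, mu=[0.2, 0.5], path_lengths=[3.0, 1.0]))
rng = np.random.default_rng(0)
ref = rng.random((64, 64)) + 1.0          # offset avoids division by zero
print(mape(ref, ref * 1.03), zncc(ref, ref * 1.03))  # -> 3.0 and 1.0
```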


Subject(s)
Benchmarking, X-Ray Computed Tomography, X-Rays, Radiography, Computer Simulation, X-Ray Computed Tomography/methods, Algorithms, Imaging Phantoms, Monte Carlo Method
2.
IEEE Trans Med Imaging; 42(10): 2853-2864, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37053055

ABSTRACT

Data consistency conditions (DCC) are mathematical equations characterizing the redundancy in X-ray projections. They have been used to correct inconsistent projections before computed tomography (CT) reconstruction. This article investigates DCC for a helical acquisition with a cylindrical detector, the geometry of most diagnostic CT scanners. The acquired projections are analyzed pair-by-pair. The intersection of each plane containing the two source positions with the corresponding cone-beams defines two fan-beams for which a DCC can be computed. Instead of rebinning the two fan-beam projections to a conventional detector, we directly derive the DCC in detector coordinates. If the line defined by the two source positions intersects the field-of-view (FOV), the DCC presents a singularity, which our numerical implementation accounts for; this increases the number of DCC compared to previous approaches, which excluded these pairs of source positions. Axial truncation of the projections is addressed by identifying the set of planes containing the two source positions for which the fan-beams are not truncated. The ability of these DCC to detect breathing motion has been evaluated on simulated and real projections. Our results indicate that the DCC can detect motion if the baseline and the FOV do not intersect. If they do, the inconsistency due to motion is dominated by discretization errors and noise. We therefore propose to normalize the inconsistency by the noise to obtain a noise-aware metric which is mostly sensitive to inconsistencies due to motion. Combined with a moving average to reduce noise, the derived DCC can detect breathing motion.
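
A minimal sketch of the noise-aware metric described at the end of the abstract, assuming a per-pair inconsistency value and an estimate of its noise standard deviation are already available (the hard part, deriving the fan-beam DCC in detector coordinates, is not reproduced here):

```python
import numpy as np

def noise_aware_inconsistency(dcc_values, noise_std, window=21):
    """Divide raw inconsistencies by their noise level, then smooth."""
    normalized = np.asarray(dcc_values, float) / np.asarray(noise_std, float)
    kernel = np.ones(window) / window
    return np.convolve(normalized, kernel, mode="same")  # moving average

rng = np.random.default_rng(0)
noise = 0.1 * np.ones(500)
signal = rng.normal(0.0, noise)      # consistent (static) projection pairs
signal[200:300] += 0.5               # inconsistency injected by "motion"
metric = noise_aware_inconsistency(signal, noise)
print(metric.max())                  # well above 1 where motion occurs
```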


Subject(s)
Spiral Computed Tomography, X-Ray Computed Tomography, Imaging Phantoms, X-Ray Computed Tomography/methods, Cone-Beam Computed Tomography/methods, Algorithms, Computer-Assisted Image Processing/methods
4.
Phys Med Biol; 67(23), 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36332267

ABSTRACT

Objective. We propose a method to model families of distributions of particles exiting a phantom with a conditional generative adversarial network (condGAN) during Monte Carlo simulation of single photon emission computed tomography imaging devices. Approach. The proposed condGAN is trained on a low-statistics dataset containing the energy, time, position and direction of exiting particles. In addition, the dataset contains a vector of conditions composed of four dimensions: the initial energy and the position of the emitted particles within the phantom (for a total of 12 dimensions). The information related to the gammas absorbed within the phantom is also added to the dataset. At the end of the training process, one component of the condGAN, the generator (G), is obtained. Main results. Particles with specific energies and positions of emission within the phantom can then be generated with G to replace the tracking of particles within the phantom, reducing computation time compared to conventional Monte Carlo simulation. Significance. The condGAN generator is trained only once for a given phantom but can generate particles from various activity source distributions.
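
A minimal PyTorch sketch of a generator G of the kind described, conditioned on a 4-dimensional vector (initial energy and emission position) and emitting the phase-space coordinates of an exiting particle; the layer sizes and the 8-dimensional output (energy, time, position, direction) are assumptions based on the abstract, not the authors' architecture:

```python
import torch
from torch import nn

class Generator(nn.Module):
    """Maps latent noise z plus a condition vector to exiting-particle
    phase-space coordinates (here assumed 8-dimensional)."""
    def __init__(self, z_dim=16, cond_dim=4, out_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

G = Generator()
z = torch.randn(1024, 16)       # latent noise
cond = torch.rand(1024, 4)      # initial energy + emission position
particles = G(z, cond)          # replaces in-phantom particle tracking
print(particles.shape)          # torch.Size([1024, 8])
```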


Subject(s)
Single Photon Emission Computed Tomography, Monte Carlo Method, Single Photon Emission Computed Tomography/methods, Imaging Phantoms, Computer Simulation
5.
Phys Med Biol; 67(19), 2022 Oct 04.
Article in English | MEDLINE | ID: mdl-36113437

ABSTRACT

Objective. To study the performance of a spectral reconstruction method for Compton imaging of polychromatic sources and compare it to standard Compton reconstruction based on the selection of photopeak events. Approach. The proposed spectral and the standard photopeak reconstruction methods are used to reconstruct images from simulated sources emitting photons of 140, 245, 364 and 511 keV simultaneously. Data are simulated with perfect and realistic energy resolutions, including Doppler broadening. We compare photopeak and spectral reconstructed images both qualitatively and quantitatively by means of the activity recovery coefficient and the spatial resolution. Main results. The presented method improves the images of polychromatic sources with respect to standard reconstruction methods. The main reasons for this improvement are the increase in available statistics and the reduction of contamination from higher initial photon energies. The reconstructed images present lower noise, a higher activity recovery coefficient and better spatial resolution. The improvements become more pronounced as the energy resolution of the detectors degrades. Significance. Compton cameras have been studied for their capability of imaging polychromatic sources, thus allowing simultaneous imaging of multiple radiotracers. In such scenarios, Compton images are conventionally reconstructed for each emission energy independently, selecting only those measured events depositing a total energy within a fixed window around the known emission lines. We propose to employ a spectral image reconstruction method for polychromatic sources, which increases the available statistics by using the information from events with partial energy deposition. The detector energy resolution influences the energy window used to select photopeak events and therefore the level of contamination by higher energies. The spectral method is expected to have a greater impact as the detector resolution worsens. In this paper we focus on energy ranges from nuclear medical imaging and consider realistic energy resolutions.
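
For context, a sketch of the standard photopeak selection that the spectral method relaxes: events are kept only if their total deposited energy falls inside a window tied to the detector energy resolution. Window parameters and event energies below are illustrative:

```python
import numpy as np

def photopeak_mask(e_total_keV, line_keV, fwhm_fraction=0.10, n_sigma=1.5):
    """Select events within a resolution-dependent window around a line."""
    sigma = fwhm_fraction * line_keV / 2.355   # FWHM -> standard deviation
    return np.abs(e_total_keV - line_keV) < n_sigma * sigma

events = np.array([135.0, 140.2, 150.0, 245.3, 360.0, 364.1, 511.5])
for line in (140.0, 245.0, 364.0, 511.0):      # emission lines as above
    print(line, events[photopeak_mask(events, line)])
```

A wider window (worse resolution) admits more contamination from partial depositions of higher-energy photons, which is exactly the regime where the spectral method is expected to help most.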


Subject(s)
Algorithms, Computer-Assisted Image Processing, Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Monte Carlo Method, Imaging Phantoms, Photons
6.
Phys Med Biol; 67(16), 2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35603758

ABSTRACT

Objective. Proton computed tomography (CT) is similar to x-ray CT but relies on protons rather than photons to form an image. In its most common operation mode, the measured quantity is the amount of energy that a proton has lost while traversing the imaged object, from which a relative stopping power (RSP) map can be obtained via tomographic reconstruction. To this end, a calorimeter which measures the energy deposited by protons downstream of the scanned object has been studied or implemented as the energy detector in several proton CT prototypes. An alternative method is to measure the proton's residual velocity, and thus its kinetic energy, via the time of flight (TOF) between at least two sensor planes. In this work, we study the RSP resolution, seen as image noise, which can be expected from TOF proton CT systems. Approach. We rely on physics models on the one hand and statistical models of the relevant uncertainties on the other to derive closed-form expressions for the noise in projection images. The TOF measurement error scales with the distance between the TOF sensor planes and is reported as a velocity error in ps/m. We use variance reconstruction to obtain noise maps of a water cylinder phantom given the scanner characteristics, and additionally reconstruct noise maps for a calorimeter-based proton CT system as reference. We use Monte Carlo simulations to verify our model and to estimate the noise due to multiple Coulomb scattering inside the object. We also provide a comparison of TOF helium and proton CT. Main results. We find that TOF proton CT with a 30 ps m⁻¹ velocity error reaches image noise similar to a calorimeter-based proton CT system with 1% energy error (1σ). A TOF proton CT system with a 50 ps m⁻¹ velocity error produces slightly less noise than a 2% calorimeter system. Noise in a reconstructed TOF proton CT image is spatially inhomogeneous, with a marked increase towards the object periphery. Our modelled noise was consistent with Monte Carlo simulated images. TOF helium CT offers lower RSP noise at equal fluence, but is less advantageous at equal imaging dose. Significance. This systematic study of image noise in TOF proton CT can serve as a guide for future developments of this alternative solution for estimating the residual energy of protons and helium ions after the scanned object.
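
A sketch of the TOF-to-energy conversion such a system relies on: the velocity measured between two planes a distance d apart gives the relativistic kinetic energy. The distances, times and the way the 30 ps/m error is injected are illustrative assumptions:

```python
import numpy as np

C = 299.792458      # speed of light in mm/ns
M_PROTON = 938.272  # proton rest energy in MeV

def kinetic_energy_from_tof(distance_mm, tof_ns, rest_energy=M_PROTON):
    """Relativistic kinetic energy from time of flight between two planes."""
    beta = distance_mm / (tof_ns * C)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return rest_energy * (gamma - 1.0)         # MeV

d = 1000.0                                     # 1 m between TOF planes
t = 6.0                                        # measured TOF in ns (toy value)
print(kinetic_energy_from_tof(d, t))           # residual kinetic energy
# Effect of a 30 ps/m velocity error (0.03 ns over this 1 m baseline):
print(kinetic_energy_from_tof(d, t + 0.03) - kinetic_energy_from_tof(d, t))
```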


Subject(s)
Computer-Assisted Image Processing, Protons, Helium, Computer-Assisted Image Processing/methods, Monte Carlo Method, Imaging Phantoms, X-Ray Computed Tomography/methods
7.
Sci Rep; 12(1): 5981, 2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35395858

ABSTRACT

Synchrotron Radiation Computed Tomography (SRCT) allows better detection of fatigue cracks in metals than laboratory CT thanks to phase contrast. However, fringes at object edges in the reconstructed images, generated by Fresnel diffraction, make it difficult to identify and quantitatively analyse the cracks. Simulations of phase contrast synchrotron tomography images containing cracks with different sizes and shapes were obtained using the GATE software. From the simulation results, we first confirmed that the bright, high-contrast parts in SRCT images are streak artifacts; second, we found that the grey-scale values within the cracks are related to crack size. These simulation results are used to analyse SRCT images of internal fatigue cracks in a cast Al alloy, providing a clearer visualisation of damage.


Subject(s)
Alloys, Aluminum, Humans, Mechanical Stress, Tomography/methods, X-Ray Computed Tomography/methods
8.
Cancers (Basel); 14(7), 2022 Mar 25.
Article in English | MEDLINE | ID: mdl-35406438

ABSTRACT

For the evaluation of biological effects, Monte Carlo toolkits can provide an RBE-weighted dose using databases of survival fraction coefficients predicted by biophysical models, such as the mMKM and NanOx models previously developed to estimate a biological dose. Using the mMKM model, we calculated the saturation-corrected dose-mean specific energy z1D* (Gy) and the dose at 10% survival, D10, for human salivary gland (HSG) cells with the Monte Carlo track structure codes LPCHEM and Geant4-DNA, and compared these with data from the literature for monoenergetic ions. The two models were used to create databases of survival fraction coefficients for several ion types (hydrogen, carbon, helium and oxygen) and for energies ranging from 0.1 to 400 MeV/n. We calculated α values as a function of LET with the mMKM and NanOx models and compared these with the literature. To estimate the biological dose for SOBPs, these databases were used with a Monte Carlo toolkit: GATE, an open-source software based on the GEANT4 toolkit. We implemented a tool in GATE, the BioDoseActor, which takes the mMKM and NanOx databases of cell survival predictions as input to estimate, at the voxel scale, biological outcomes when treating a patient. We modeled the HIBMC 320 MeV/u carbon-ion beam line and tested the BioDoseActor for the prediction of biological dose, relative biological effectiveness (RBE) and cell survival fraction for the irradiation of the HSG cell line. For the cell survival fraction, we obtained satisfactory results. For the prediction of the biological dose, a 10% relative difference between mMKM and NanOx was reported.
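
A sketch of the linear-quadratic bookkeeping behind such survival databases: given α (e.g. derived from z1D* in mMKM) and β, the survival fraction is S(D) = exp(-αD - βD²) and D10 solves αD + βD² = ln 10. The coefficient values below are illustrative, not the paper's HSG data:

```python
import numpy as np

def survival(dose_gy, alpha, beta):
    """Linear-quadratic survival fraction S(D) = exp(-aD - bD^2)."""
    return np.exp(-alpha * dose_gy - beta * dose_gy**2)

def d10(alpha, beta):
    """Dose at 10% survival: positive root of beta*D^2 + alpha*D = ln(10)."""
    return (-alpha + np.sqrt(alpha**2 + 4.0 * beta * np.log(10.0))) / (2.0 * beta)

alpha, beta = 0.3, 0.05        # Gy^-1, Gy^-2 (illustrative values only)
print(d10(alpha, beta))        # dose at 10% survival
print(survival(d10(alpha, beta), alpha, beta))  # ~0.1 by construction
```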

9.
Med Phys; 49(5): 3457-3469, 2022 May.
Article in English | MEDLINE | ID: mdl-35318686

ABSTRACT

PURPOSE: In hadrontherapy, biophysical models can be used to predict the biological effect received by cancerous tissues and organs at risk. The input data of these models generally consist of information on nano/microdosimetric quantities and, for some models, reactive species produced by water radiolysis. To fully account for the stochastic effects of radiation, these input data have to be provided by Monte Carlo track structure (MCTS) codes, which estimate the physical, physico-chemical, and chemical effects of radiation at the molecular scale. The objective of this study is to benchmark two MCTS codes, Geant4-DNA and LPCHEM, which are useful for estimating the biological effects of ions during radiation therapy treatments. MATERIAL AND METHODS: In this study we simulated specific energy spectra for monoenergetic proton beams (10 MeV) as well as radiolysis species production for both electron (1 MeV) and proton (10 MeV) beams with the Geant4-DNA and LPCHEM codes. Options 2, 4, and 6 of the Geant4-DNA physics lists have been benchmarked against LPCHEM. We compared probability distributions of energy transfer points in cylindrical nanometric targets (10 nm) positioned in a liquid water box. Then, radiochemical species (•OH, e⁻aq, H₃O⁺, H₂O₂, H₂, and OH⁻) yields simulated between 10⁻¹² and 10⁻⁶ s after irradiation are compared. RESULTS: Overall, the specific energy spectra and the chemical yields obtained by the two codes are in good agreement considering the uncertainties on the experimental data used to calibrate the parameters of the MCTS codes. For 10 MeV proton beams, ionization and excitation processes are the major contributors to the specific energy deposition (larger than 90%), while attachment, solvation, and vibration processes are minor contributors. LPCHEM simulates tracks with slightly more concentrated energy depositions than Geant4-DNA, which translates into slightly faster recombination. Relative deviations (C_EV) with respect to the average evolution rates of the radical yields between 10⁻¹² and 10⁻⁶ s remain below 10%. Comparing execution times, LPCHEM is faster than Geant4-DNA by a factor of about four for 1000 primary particles across all simulation stages (physical, physico-chemical, and chemical). In multi-thread mode (four threads), Geant4-DNA computing times are reduced but remain ∼20% to ∼50% slower than LPCHEM. CONCLUSIONS: For the first time, the entire physical, physico-chemical, and chemical models of two Monte Carlo track structure codes have been benchmarked, along with an extensive analysis of the effects on the water radiolysis simulation. This study opens up new perspectives in using specific energy distributions and radiolytic species yields from monoenergetic ions in biophysical models integrated into Monte Carlo software.


Subject(s)
Electrons, Protons, Benchmarking, Computer Simulation, DNA/chemistry, Ions, Monte Carlo Method, Water/chemistry
10.
Phys Med Biol; 66(20), 2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34555825

ABSTRACT

This note addresses an issue faced by every proton computed tomography (CT) reconstruction software: the modelling and parametrisation of the multiple Coulomb scattering power for the estimation of the most likely path (MLP) of each proton. The conventional approach uses a polynomial model parametrised as a function of depth for a given initial beam energy. This makes it cumbersome to implement software that works for proton CT data acquired with an arbitrary beam energy or with energy modulation during acquisition. We propose a simple way to parametrise the scattering power based only on the measured proton CT list-mode data and derive a compact expression for the MLP based on a conventional MLP model. Our MLP does not require any parameters. The method assumes the imaged object to be homogeneous, as do most conventional MLPs, but requires no information about the material, as opposed to most conventional MLP expressions, which often assume water to infer energy loss. Instead, our MLP automatically adapts itself to the energy loss which actually occurred in the object, which is one of the measurements required for proton CT reconstruction. We validate our MLP method numerically and find excellent agreement with conventional MLP methods.
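
For reference, a compact sketch of the conventional Gaussian MLP estimate that such work builds on, with the scattering power supplied as an arbitrary function T(u); the note's contribution, parametrising T from the list-mode data itself, is not reproduced here:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept local for NumPy-version robustness)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mlp_point(u0, u1, u2, y0, y2, T, n=200):
    """MLP estimate (t, theta) at depth u1, given entry state y0=(t0, th0)
    at u0, exit state y2=(t2, th2) at u2, and scattering power T(u)."""
    ua, ub = np.linspace(u0, u1, n), np.linspace(u1, u2, n)
    def sigma(u, u_end):                     # scattering covariance matrix
        t = T(u)
        s01 = _trapz((u_end - u) * t, u)
        return np.array([[_trapz((u_end - u)**2 * t, u), s01],
                         [s01, _trapz(t, u)]])
    S1, S2 = sigma(ua, u1), sigma(ub, u2)
    R0 = np.array([[1.0, u1 - u0], [0.0, 1.0]])   # straight drift u0 -> u1
    R1 = np.array([[1.0, u2 - u1], [0.0, 1.0]])   # straight drift u1 -> u2
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    A = S1i + R1.T @ S2i @ R1
    b = S1i @ (R0 @ y0) + R1.T @ S2i @ y2
    return np.linalg.solve(A, b)                  # (t, theta) at u1

T = lambda u: 1e-5 * np.ones_like(u)   # toy constant scattering power
print(mlp_point(0.0, 100.0, 200.0, np.array([0.0, 0.0]),
                np.array([2.0, 0.02]), T))
```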


Subject(s)
Algorithms, Protons, Monte Carlo Method, Imaging Phantoms, X-Ray Computed Tomography/methods
11.
Phys Med Biol; 66(12), 2021 Jun 14.
Article in English | MEDLINE | ID: mdl-34020434

ABSTRACT

Online ion range monitoring in hadron therapy can be performed via detection of secondary radiation, such as prompt γ-rays, emitted during treatment. The prompt-γ emission profile is correlated with the ion depth-dose profile and can be reconstructed via Compton imaging. The line-cone reconstruction, using the intersection between the primary beam trajectory and the cone reconstructed via a Compton camera, requires negligible computation time compared to iterative algorithms. A recent report hypothesised that time-of-flight (TOF) based discrimination could improve the precision of the γ fall-off position (FOP) measured via line-cone reconstruction, where TOF comprises both the proton transit time from the phantom entrance until γ emission, and the flight time of the γ-ray to the detector. The aim of this study was to implement such a method and investigate the influence of temporal resolution on the precision of the FOP. Monte Carlo simulations of a 160 MeV proton beam incident on a homogeneous PMMA phantom were performed using GATE. The Compton camera consisted of a silicon-based scatterer and a CeBr3 scintillator absorber. The temporal resolution of the detection system (absorber + beam trigger) was varied between 0.1 and 1.3 ns rms and a TOF-based discrimination method applied to eliminate unlikely solution(s) from the line-cone reconstruction. The FOP was obtained for varying temporal resolutions and its precision obtained from its shift across 100 independent γ emission profiles compared to a high-statistics reference profile. The optimal temporal resolution for the given camera geometry and 10⁸ primary protons was 0.2 ns, where a precision of 2.30 ± 0.15 mm (1σ) on the FOP was found. This precision is comparable to current state-of-the-art Compton imaging using iterative reconstruction methods or 1D imaging with mechanically collimated devices, and satisfies the requirement of being smaller than the clinical safety margins.
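
A sketch of the line-cone step itself: intersecting the beam axis with a Compton cone reduces to a quadratic in the line parameter, generally yielding two candidate vertices, which is precisely why a TOF-based discrimination is needed to eliminate the unlikely one. Geometry values are illustrative:

```python
import numpy as np

def line_cone_intersections(p0, v, apex, axis, beta):
    """Intersect line p(s) = p0 + s*v with a cone of given apex, unit axis
    and half-angle beta; returns the points on the forward nappe."""
    m = p0 - apex
    c2 = np.cos(beta)**2
    A = np.dot(v, axis)**2 - c2
    B = 2.0 * (np.dot(m, axis) * np.dot(v, axis) - c2 * np.dot(m, v))
    C = np.dot(m, axis)**2 - c2 * np.dot(m, m)
    disc = B**2 - 4.0 * A * C
    if disc < 0.0:
        return []                          # beam line misses the cone
    s = [(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)]
    pts = [p0 + si * v for si in s]
    # keep the correct nappe: (x - apex) must point along +axis
    return [x for x in pts if np.dot(x - apex, axis) > 0.0]

p0 = np.array([0.0, 0.0, 0.0])             # beam entrance
v = np.array([0.0, 0.0, 1.0])              # beam direction (unit)
apex = np.array([0.0, 100.0, 50.0])        # interaction in the scatterer
axis = np.array([0.0, -1.0, 0.0])          # cone axis (unit)
print(line_cone_intersections(p0, v, apex, axis, np.deg2rad(30.0)))
```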


Subject(s)
Proton Therapy, Diagnostic Imaging, Gamma Rays, Computer-Assisted Image Processing, Monte Carlo Method, Imaging Phantoms
12.
Phys Med Biol; 66(13), 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34020438

ABSTRACT

We propose a novel prompt-gamma (PG) imaging modality for real-time monitoring in proton therapy: PG time imaging (PGTI). By measuring the time of flight (TOF) between a beam monitor and a PG detector, our goal is to reconstruct the PG vertex distribution in 3D. In this paper, a dedicated, non-iterative reconstruction strategy is proposed (PGTI reconstruction). Here, it was resolved under a 1D approximation to measure a proton range shift along the beam direction. In order to show the potential of PGTI in the transverse plane, a second method, based on the calculation of the centre of gravity (COG) of the TIARA pixel detectors' counts, was also explored. The feasibility of PGTI was evaluated in two different scenarios. Under the assumption of a 100 ps (rms) time resolution (achievable in single proton regime), MC simulations showed that a millimetric proton range shift is detectable at 2σ with 10⁸ incident protons in simplified simulation settings. With the same proton statistics, a potential 2 mm sensitivity (at 2σ) to beam displacements in the transverse plane was found using the COG method. This level of precision would allow acting in real time if the treatment does not conform to the treatment plan. A worst-case scenario of a 1 ns (rms) TOF resolution was also considered to demonstrate that degraded timing information can be compensated for by increasing the acquisition statistics: in this case, a 2 mm range shift would be detectable at 2σ with 10⁹ incident protons. By showing the feasibility of a time-based algorithm for the reconstruction of the PG vertex distribution for a simplified anatomy, this work lays a theoretical basis for the future development of a PG imaging detector based on the measurement of particle TOF.
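
The COG method amounts to a count-weighted average of pixel positions; a minimal sketch with hypothetical pixel coordinates and counts:

```python
import numpy as np

def centre_of_gravity(pixel_positions, counts):
    """Count-weighted mean of pixel positions (COG)."""
    w = np.asarray(counts, dtype=float)
    return (np.asarray(pixel_positions) * w[:, None]).sum(axis=0) / w.sum()

positions = np.array([[-10.0, 0.0], [0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
counts = np.array([120, 400, 150, 90])          # hypothetical pixel counts
print(centre_of_gravity(positions, counts))     # shifts track beam motion
```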


Subject(s)
Proton Therapy, Diagnostic Imaging, Gamma Rays, Monte Carlo Method, Imaging Phantoms, Protons
13.
Phys Med Biol; 65(10): 105010, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32143200

ABSTRACT

Several direct algorithms have been proposed to take into account the non-linear path of protons in the reconstruction of a proton CT (pCT) image. This paper presents a comparison of five of them in terms of spatial resolution and relative stopping power (RSP) accuracy. Our comparison includes (1) a distance-driven algorithm extending the filtered backprojection to non-linear trajectories (DD), (2) an algorithm reconstructing a pCT image from optimized projections (ML), (3) a backproject-then-filter approach using a 2D cone filter (BTF), (4) a differentiated backprojection algorithm based on the inversion of the Hilbert transform (DBP), and (5) an algorithm using a 2D directional ramp filter (DR). We have simulated a single-tracking pCT set-up using Geant4 through GATE, with a proton source and two position, direction and energy detectors upstream and downstream of the object. Tracker uncertainties were added to the position and direction measurements. A Catphan 528 phantom and a spiral phantom were simulated to measure the spatial resolution, and a Gammex 467 phantom was used for the RSP accuracy. Each proton's trajectory was estimated using a most likely path (MLP) formalism. The spatial resolution was evaluated using the frequency corresponding to a modulation transfer function of 10% of its peak value, and the RSP accuracy using the mean values in the inserts of the Gammex phantom. In terms of spatial resolution, it was shown that, for ideal trackers, the DR and BTF methods offer a slightly better resolution since each proton is directly binned in the image grid according to its MLP. However, all methods except ML show comparable resolution when using realistic trackers. Regarding the RSP, three algorithms (DR, DD and BTF) show a mean relative error inside the inserts of about 0.1%. As the DR and BTF methods are more computationally expensive, the DD (which offers the same spatial resolution in realistic conditions and the same accuracy) and the DBP (which has fairly good accuracy, <0.2%, and allows reconstruction from truncated data) can be used for a reduced reconstruction time.
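
A sketch of the resolution metric used here: normalise the modulation transfer function to its peak and interpolate the frequency at which it drops to 10%. The Gaussian line spread function stands in for a measured profile:

```python
import numpy as np

def mtf10_frequency(lsf, pixel_mm):
    """Frequency (cycles/mm) where the MTF falls to 10% of its peak."""
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                   # peak is at DC for an LSF
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)
    idx = np.argmax(mtf < 0.1)                      # first bin below 10%
    f0, f1, m0, m1 = freqs[idx - 1], freqs[idx], mtf[idx - 1], mtf[idx]
    return f0 + (0.1 - m0) * (f1 - f0) / (m1 - m0)  # linear interpolation

x = np.arange(-64, 64) * 0.1                        # 0.1 mm pixels
lsf = np.exp(-x**2 / (2 * 0.4**2))                  # toy line spread function
print(mtf10_frequency(lsf, 0.1))                    # ~0.85 lp/mm here
```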


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Protons, X-Ray Computed Tomography, Nonlinear Dynamics, Imaging Phantoms
14.
IEEE Trans Med Imaging; 39(6): 2267-2276, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32011246

ABSTRACT

The problem of scattered radiation correction in computed tomography (CT) is well known because scatter induces a bias, a loss of contrast and artifacts. Numerous strategies have been proposed in conventional CT (using energy-integrating detectors), but the problem is still open in the field of spectral CT, a new imaging technique based on energy-selective photon counting detectors. The aim of the present study is to introduce a scatter correction method adapted to multi-energy imaging and based on the use of a primary modulator mask. The main contributions are a correction matrix, which compensates for the effect of the mask, a scatter model based on B-splines, and a cost function based on the mask structures and robust to the object structures. The performance of the method was evaluated on both simulated and experimental data. The mean relative error was reduced from 20% in the lower energy bins without correction to 4% with the proposed technique, which is close to the error caused by statistical noise.
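
The scatter model exploits the fact that scatter is low-frequency: a smooth B-spline surface is fitted to scatter samples. A toy sketch using SciPy's smoothed bivariate spline in place of the paper's mask-driven fit and cost function:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Toy "true" scatter: a broad, low-frequency bump over the detector.
ny, nx = 64, 96
y, x = np.arange(ny), np.arange(nx)
Y, X = np.meshgrid(y, x, indexing="ij")
scatter_true = 50 * np.exp(-((X - 48)**2 + (Y - 32)**2) / 2000.0)
samples = scatter_true + np.random.default_rng(1).normal(0, 2, (ny, nx))

# Smoothed B-spline surface (s tuned to the expected residual variance).
spline = RectBivariateSpline(y, x, samples, kx=3, ky=3, s=ny * nx * 4.0)
scatter_est = spline(y, x)                        # smooth scatter estimate
print(np.abs(scatter_est - scatter_true).mean())  # small mean residual
```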


Subject(s)
Artifacts, X-Ray Computed Tomography, Algorithms, Imaging Phantoms, Photons, Radiation Scattering
15.
Phys Med Biol; 64(6): 065003, 2019 Mar 08.
Article in English | MEDLINE | ID: mdl-30695753

ABSTRACT

The use of a most likely path (MLP) formalism for protons to account for the effects of multiple Coulomb scattering has improved the spatial resolution in proton computed tomography (pCT). However, this formalism assumes a homogeneous medium and continuous scattering of protons. In this paper, we quantify the path prediction error induced by transverse heterogeneities to assess whether correcting for such errors might improve the spatial resolution of pCT. To this end, we have tracked proton trajectories using Monte Carlo simulations in several phantoms with different heterogeneities. Our results show that transverse heterogeneities induce non-Gaussian spatial distributions leading to errors in the prediction of the MLP, reaching 0.4 mm in a 20 cm wide simulated heterogeneity and 0.13 mm in a realistic phantom. It was also shown that when the spatial distributions have more than one peak, a most likely path, if any, has yet to be defined. Transverse heterogeneities also affect energy profiles, which could explain some of the artifacts described in other works and could make the energy cuts usually performed to exclude nuclear events less efficient.


Subject(s)
Monte Carlo Method, Imaging Phantoms, Protons, X-Ray Computed Tomography/methods, Humans
16.
Phys Imaging Radiat Oncol; 6: 20-24, 2018 Apr.
Article in English | MEDLINE | ID: mdl-33458384

ABSTRACT

The mean excitation energy, I, is an essential quantity for proton treatment planning. This work investigated the feasibility of extracting the spatial distribution of I by combining two computed tomography (CT) modalities, dual-energy CT and proton CT, which provide the spatial distributions of the relative electron density and the stopping power relative to water, respectively. We provide the analytical derivation of I as well as its uncertainty. Results were validated on simulated X-ray and proton CT images of a digital anthropomorphic phantom. The accuracy was below 15% with a large uncertainty, which demonstrates both the potential and the limits of the technique.
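
A sketch of the analytical extraction, assuming the plain Bethe formula without correction terms: SPR = ρe·L(I)/L(I_water) with L(I) = ln(2·me·c²·β²/(I·(1-β²))) - β², solved for I. The β² and I_water values are typical assumptions, not the paper's:

```python
import numpy as np

M_E_C2 = 0.511e6  # electron rest energy in eV

def mean_excitation_energy(spr, rho_e, beta2=0.32, i_water_eV=78.0):
    """Invert SPR = rho_e * L(I)/L(I_water) for I, with
    L(I) = ln(2 me c^2 beta^2 / (I (1 - beta^2))) - beta^2.
    beta2 ~ 0.32 corresponds roughly to a 200 MeV proton (assumption)."""
    k = 2.0 * M_E_C2 * beta2 / (1.0 - beta2)     # eV
    l_water = np.log(k / i_water_eV) - beta2
    l_tissue = (spr / rho_e) * l_water
    return k * np.exp(-(l_tissue + beta2))       # I in eV

print(mean_excitation_energy(spr=1.05, rho_e=1.04))  # soft-tissue-like values
```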

17.
Med Phys; 44(9): 4548-4558, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28675582

ABSTRACT

PURPOSE: Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to the proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. METHODS: We investigated the effect of various voltage and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams in the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. RESULTS: The energy separation between the incident spectra had a strong impact on the SPR precision: maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE dose. Photon noise in the SPR images (20 mGy dose) had little impact on the proton range accuracy, as comparable results were obtained in the noiseless situation (infinite dose). Root-mean-square range errors averaged over all irradiation angles were between 0.50 mm and 0.72 mm for the noiseless situation and between 0.51 mm and 0.77 mm for the realistic scenario. CONCLUSIONS: The impact of the dual-energy spectra and of the dose allocation between energy levels on the SPR accuracy and precision was evaluated through a projection-based dual-energy algorithm to guide the choice of spectra for dual-energy CT in proton therapy. The dose balance between energy levels was not found to be a sensitive parameter for SPR estimation. The optimal pair of dual-energy spectra was material dependent, but on a heterogeneous anthropomorphic phantom there was no significant difference in range accuracy, so the choice of spectra can be driven by the precision, i.e., the energy gap.
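
For intuition, a heavily simplified decomposition sketch: at effective monochromatic energies, the two log-measurements are linear in the basis-material path lengths, giving a 2x2 system per ray. The real projection-based method inverts a polychromatic forward model with the detector response; the attenuation coefficients here are illustrative:

```python
import numpy as np

MU = np.array([[0.25, 0.55],    # low-energy:  [water, bone] in 1/cm (toy)
               [0.18, 0.30]])   # high-energy: [water, bone] in 1/cm (toy)

def decompose(log_le, log_he):
    """Recover basis-material path lengths from the two log-measurements."""
    return np.linalg.solve(MU, np.array([log_le, log_he]))

true = np.array([15.0, 2.0])          # 15 cm water + 2 cm bone along the ray
log_le, log_he = MU @ true            # ideal noiseless measurements
print(decompose(log_le, log_he))      # recovers [15., 2.]
```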


Subject(s)
Proton Therapy, X-Ray Computed Tomography, Algorithms, Humans, Imaging Phantoms, Protons
18.
J Xray Sci Technol; 2017 Apr 05.
Article in English | MEDLINE | ID: mdl-28387696

ABSTRACT

One of the well-recognized challenges of Cone-Beam Computed Tomography (CBCT) is scatter contamination within the projection images. Scatter degrades image quality by decreasing contrast and introducing cupping and shading artifacts, thus leading to inaccuracies in the reconstructed values. The higher scatter-to-primary ratio experienced in industrial applications leads to even more severe artifacts. Various strategies have been investigated to manage the scatter signal in CBCT projection data. One of these strategies is to calculate the scatter intensity by deconvolution of the primary intensity using Scatter Kernel Superposition (SKS). In this paper, we present an approach combining experimental measurements and Monte Carlo simulations to estimate the scatter kernels for industrial applications, based on the continuously thickness-adapted kernels strategy with a four-Gaussian modeling of the kernels. We compare this approach with an experimental technique based on a two-Gaussian modeling of the kernels. The results prove the superiority of the four-Gaussian model, which effectively takes into account the contributions of both object and detector scattering, over the two-Gaussian approach. We also present the parameterisation of the scatter kernels with respect to object-to-detector distance. This approach allows a single geometry to be used for the calculation of scatter kernels over the whole magnification range of the acquisition setup.
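
A sketch of a radially symmetric four-Gaussian kernel of the kind fitted here, with two narrow terms standing in for object scatter and two wide terms for detector scatter; all parameter values are illustrative placeholders:

```python
import numpy as np

def four_gaussian_kernel(r, amps, sigmas):
    """Sum of four radial Gaussians: k(r) = sum_i a_i exp(-r^2 / 2 s_i^2)."""
    r = np.asarray(r, dtype=float)
    return sum(a * np.exp(-r**2 / (2.0 * s**2)) for a, s in zip(amps, sigmas))

r = np.linspace(0.0, 200.0, 401)                 # radius in pixels
amps = (1e-3, 4e-4, 2e-4, 5e-5)                  # object + detector terms (toy)
sigmas = (8.0, 25.0, 60.0, 150.0)                # narrow -> wide widths (toy)
kernel = four_gaussian_kernel(r, amps, sigmas)
print(kernel[:3])   # scatter point-spread values near the kernel centre
```

In the thickness-adapted SKS scheme, the amplitudes and widths would then be continuously interpolated as functions of object thickness (and, per the last paragraph, of object-to-detector distance) before convolution with the primary image.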

19.
J Xray Sci Technol; 24(5): 723-732, 2016 Oct 06.
Article in English | MEDLINE | ID: mdl-27716681

ABSTRACT

Due to the increased cone beam coverage and the introduction of flat panel detectors, the size of X-ray illumination fields has grown dramatically in Cone-Beam Computed Tomography (CBCT), causing an increase in scatter radiation. Existing reconstruction algorithms do not model scatter radiation, so scatter artifacts appear in the reconstructed images. The contribution of photons scattering inside the detector itself becomes prominent and challenging in the case of high-energy X-ray sources (over a few hundred keV), which are typical in industrial Non-Destructive Testing (NDT). In this paper, a comprehensive evaluation of the contribution of detector scatter is performed using continuously thickness-adapted kernels. A separation of the scatter due to the object and the detector is presented using a four-Gaussian model. The results obtained prove that correcting only for object scatter is not sufficient to obtain reconstructed images free from artifacts, as the detector also scatters considerably. The results are also validated experimentally using a collimator to remove the contribution of object scatter.


Subject(s)
Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing/methods, Algorithms, Artifacts, Equipment Design, Statistical Models
20.
Med Phys; 43(9): 5199, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27587051

ABSTRACT

PURPOSE: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. METHODS: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver were used, combined with a dosimeter sensitive to the range of voltages of interest. A sensitivity analysis has also been conducted for each parameter of the source and detector models. RESULTS: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. CONCLUSIONS: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The protocol has been successfully applied to three x-ray imaging systems. The minimal requirements in terms of material and equipment make its implementation suitable in most clinical environments.
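
A toy sketch of the calibration idea: model the dose behind each filter as a weighted sum over spectral bins attenuated per Beer-Lambert, then fit non-negative bin weights to the measured doses. Bin energies, attenuation values and "measurements" are all illustrative:

```python
import numpy as np
from scipy.optimize import nnls

energies = np.array([40.0, 60.0, 80.0, 100.0])      # keV bins (toy)
mu_al = np.array([0.15, 0.08, 0.05, 0.04])          # Al attenuation, 1/mm (toy)
thicknesses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # Al filters in mm
dose_per_fluence = np.array([1.5, 1.0, 0.8, 0.7])   # dosimeter response (toy)

# Forward model: dose behind each filter = sum over bins of
# weight * exp(-mu * thickness) * dose-per-fluence.
A = np.exp(-np.outer(thicknesses, mu_al)) * dose_per_fluence
true_w = np.array([0.1, 0.4, 0.35, 0.15])           # "unknown" spectrum
measured = A @ true_w                                # noiseless measurements

weights, residual = nnls(A, measured)                # non-negative fit
print(weights, residual)                             # recovers true_w
```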


Subject(s)
Cone-Beam Computed Tomography, Theoretical Models, Calibration