ABSTRACT
BACKGROUND: In recent years, deep reinforcement learning (RL) has been applied to various medical tasks and produced encouraging results. OBJECTIVE: In this paper, we demonstrate the feasibility of deep RL for denoising simulated deep-silicon photon-counting CT (PCCT) data in both full and interior scan modes. PCCT offers higher spatial and spectral resolution than conventional CT, but requires advanced denoising methods to suppress the accompanying increase in noise. METHODS: In this work, we apply a dueling double deep Q network (DDDQN) to denoise PCCT data for maximum contrast-to-noise ratio (CNR), together with a multi-agent approach to handle data non-stationarity. RESULTS: Using our method, we obtained significant image quality improvement for single-channel scans and consistent improvement across all three channels of multichannel scans. For the single-channel interior scans, the PSNR (dB) and SSIM increased from 33.4078 and 0.9165 to 37.4167 and 0.9790, respectively. For the multichannel interior scans, the channel-wise PSNR (dB) increased from 31.2348, 30.7114, and 30.4667 to 31.6182, 30.9783, and 30.8427, respectively. Similarly, the SSIM improved from 0.9415, 0.9445, and 0.9336 to 0.9504, 0.9493, and 0.0326, respectively. CONCLUSIONS: Our results show that the RL approach improves image quality effectively, efficiently, and consistently across multiple spectral channels and has great potential in clinical applications.
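The abstract does not give architectural details, so the following PyTorch sketch only illustrates the two ingredients named in the method: a dueling Q-network head and the double-DQN target. The state dimension, action count, and layer widths are hypothetical placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network: shared features split into value and advantage streams."""
    def __init__(self, n_features: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, x):
        h = self.backbone(x)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a): the standard dueling combination
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, reward, next_state, gamma=0.99):
    """Double-DQN target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * next_q

# Toy usage with hypothetical dimensions (e.g., local patch statistics as the state).
online_net, target_net = DuelingQNet(16, 5), DuelingQNet(16, 5)
state = torch.randn(8, 16)
targets = double_dqn_target(online_net, target_net, torch.zeros(8), state)
```

In the multi-agent setting mentioned above, several such agents could operate on different channels or regions with a shared CNR-based reward; the exact arrangement is not specified in the abstract.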
Subject(s)
Algorithms , Silicon , X-Rays , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
Dual-energy computed tomography (DECT) acquires two x-ray projection datasets with different x-ray energy spectra, performs material-specific image reconstruction based on the energy-dependent non-linear integral model, and provides more accurate quantification of attenuation coefficients than single-energy-spectrum CT. In the diagnostic energy range, x-ray energy-dependent attenuation is mainly caused by photoelectric absorption and Compton scattering. Theoretically, these two physical components of the x-ray attenuation mechanism can be determined from two projection datasets with distinct energy spectra. Practically, solving the non-linear integral equation is complicated by spectral uncertainty, detector sensitivity, and data noise, and conventional multivariable optimization methods are prone to local minima. In this paper, we develop a new method for DECT image reconstruction in the projection domain. This method combines an analytic solution of a polynomial equation with a univariate optimization to solve the polychromatic non-linear integral equation. The polynomial equation of odd order has a unique real solution with sufficient accuracy for image reconstruction, and the univariate optimization can reach the global optimum, allowing accurate and stable projection decomposition for DECT. Numerical and physical phantom experiments demonstrate the effectiveness of the method in comparison with state-of-the-art projection decomposition methods. The univariate optimization method yields a 15% improvement in reconstructed image quality and a substantial reduction in computational time compared with the multivariable optimization methods.
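For intuition, the sketch below mimics projection-domain decomposition with a synthetic two-spectrum forward model. It keeps the nested one-dimensional structure (an inner solve for one basis line integral, an outer univariate optimization over the other) but replaces the analytic odd-order polynomial solution with generic numerical root-finding; all spectra and attenuation numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Synthetic discrete spectra (weights) and basis-material attenuation values per energy bin.
E = np.array([40.0, 60.0, 80.0, 100.0])          # keV (illustrative)
w_lo = np.array([0.40, 0.35, 0.20, 0.05])        # low-kVp spectrum weights
w_hi = np.array([0.15, 0.30, 0.30, 0.25])        # high-kVp spectrum weights
mu1 = np.array([0.80, 0.45, 0.30, 0.25])         # basis material 1 (bone-like), 1/cm
mu2 = np.array([0.30, 0.25, 0.21, 0.19])         # basis material 2 (water-like), 1/cm

def poly_projection(w, a1, a2):
    """Polychromatic log-projection for basis line integrals (a1, a2)."""
    return -np.log(np.sum(w * np.exp(-mu1 * a1 - mu2 * a2)))

# Simulate a pair of measurements for known ground-truth line integrals.
a1_true, a2_true = 1.2, 3.0
p_lo = poly_projection(w_lo, a1_true, a2_true)
p_hi = poly_projection(w_hi, a1_true, a2_true)

def solve_a2_given_a1(a1):
    """Inner 1D root solve: match the low-kVp measurement for a fixed a1.
    The projection is monotone in a2, so a wide bracket always contains the root."""
    f = lambda a2: poly_projection(w_lo, a1, a2) - p_lo
    return brentq(f, -50.0, 50.0)

def mismatch(a1):
    """Outer univariate objective: residual of the high-kVp measurement."""
    a2 = solve_a2_given_a1(a1)
    return (poly_projection(w_hi, a1, a2) - p_hi) ** 2

res = minimize_scalar(mismatch, bounds=(0.0, 10.0), method="bounded")
a1_est = res.x
a2_est = solve_a2_given_a1(a1_est)
print(a1_est, a2_est)   # for this toy setup the estimates land near (1.2, 3.0)
```

Because the polychromatic projection is monotone in each basis line integral, the inner solve is well posed, which is what reduces the decomposition to a single-variable search.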
ABSTRACT
Contrast-enhanced computed tomography (CECT) enhances visibility for tumor imaging. When a high-Z contrast agent interacts with X-rays across its K-edge, photoelectric absorption increases abruptly, resulting in a significant difference in X-ray transmission intensity between the energy windows immediately below and above the K-edge. Using photon-counting detectors, the X-ray intensity data in the two windows can be measured simultaneously. The differential information between the two intensity datasets reflects the contrast-agent concentration distribution. K-edge differences between materials thus create opportunities to identify contrast agents in biomedical applications. In this paper, a general Radon transform is established to link the contrast-agent concentration to the X-ray intensity measurements. An iterative algorithm is proposed to reconstruct the contrast-agent distribution and the tissue attenuation background simultaneously. Comprehensive numerical simulations are performed to demonstrate the merits of the proposed method over existing K-edge imaging methods. Our results show that the proposed method accurately quantifies a contrast-agent distribution, optimizing the contrast-to-noise ratio at high dose efficiency.
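At the sinogram level, the key quantity is the pair of log-transmissions in the windows just below and above the K-edge: the agent's attenuation jumps across the K-edge while tissue attenuation varies smoothly, so the two measurements can be unmixed per ray. The sketch below shows this simple two-window estimate with illustrative attenuation values; it is a direct subtraction-style calculation, not the paper's simultaneous iterative reconstruction.

```python
import numpy as np

# Illustrative attenuation values around a gadolinium-like K-edge: 1/cm per unit
# concentration for the agent, 1/cm for tissue. Real values depend on spectra and windows.
MU_AGENT_LEFT, MU_AGENT_RIGHT = 1.0, 3.5      # agent attenuation jumps upward across the K-edge
MU_TISSUE_LEFT, MU_TISSUE_RIGHT = 0.21, 0.20  # tissue attenuation changes only slightly

def agent_line_integral(I_left, I_right, I0_left, I0_right):
    """Per-ray estimate of the contrast-agent line integral from two energy windows."""
    p_left = -np.log(I_left / I0_left)        # = mu_t_left*L_t + mu_a_left*L_a
    p_right = -np.log(I_right / I0_right)     # = mu_t_right*L_t + mu_a_right*L_a
    # Solve the 2x2 system for (L_tissue, L_agent); the K-edge jump keeps it well conditioned.
    A = np.array([[MU_TISSUE_LEFT, MU_AGENT_LEFT],
                  [MU_TISSUE_RIGHT, MU_AGENT_RIGHT]])
    L_tissue, L_agent = np.linalg.solve(A, np.array([p_left, p_right]))
    return L_agent

# Toy ray: 10 cm of tissue plus 2 (concentration x cm) of agent, noise-free.
I0 = 1e5
I_l = I0 * np.exp(-(MU_TISSUE_LEFT * 10 + MU_AGENT_LEFT * 2))
I_r = I0 * np.exp(-(MU_TISSUE_RIGHT * 10 + MU_AGENT_RIGHT * 2))
print(agent_line_integral(I_l, I_r, I0, I0))   # recovers ~2
```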
ABSTRACT
Extremely low-dose CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This work explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms: a spatial penalty and a mixture penalty. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air+background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the RMSE by roughly a factor of two compared with weighted least squares and filtered backprojection reconstruction of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE than a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-low-dose CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization, which diminishes enthusiasm for exploring future applications of the mixture penalty.
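The abstract does not spell out the penalty forms, so the sketch below only shows one plausible shape of such an objective: a Poisson log-likelihood data term, a quadratic neighbor-difference penalty, and a four-class Gaussian-mixture penalty on voxel attenuation values. The class means, widths, weights, regularization strengths, and the forward-projector placeholder are illustrative assumptions, and the dICD optimizer itself is not shown.

```python
import numpy as np

# Hypothetical four-cluster mixture for attenuation values (1/cm, illustrative only):
# air+background, lung, soft tissue, bone.
MIX_W  = np.array([0.25, 0.15, 0.50, 0.10])      # class priors
MIX_MU = np.array([0.00, 0.05, 0.19, 0.40])      # class means
MIX_SD = np.array([0.01, 0.02, 0.02, 0.05])      # class standard deviations

def poisson_neg_loglike(y, mu_img, system_forward, I0=1e4):
    """Poisson data term: y_i ~ Poisson(I0 * exp(-[A mu]_i)); system_forward is a
    user-supplied forward projector (placeholder here)."""
    line_int = system_forward(mu_img)
    ybar = I0 * np.exp(-line_int)
    return np.sum(ybar - y * np.log(ybar + 1e-12))

def quadratic_penalty(mu_img):
    """Quadratic penalty on horizontal/vertical neighbor differences."""
    dx = np.diff(mu_img, axis=0)
    dy = np.diff(mu_img, axis=1)
    return np.sum(dx ** 2) + np.sum(dy ** 2)

def mixture_penalty(mu_img):
    """Negative log of a Gaussian mixture evaluated at each voxel value."""
    v = mu_img.ravel()[:, None]
    comp = MIX_W * np.exp(-0.5 * ((v - MIX_MU) / MIX_SD) ** 2) / (MIX_SD * np.sqrt(2 * np.pi))
    return -np.sum(np.log(comp.sum(axis=1) + 1e-12))

def map_objective(mu_img, y, system_forward, beta_q=10.0, beta_m=1.0):
    """Penalized-likelihood objective to be minimized by a coordinate-descent solver."""
    return (poisson_neg_loglike(y, mu_img, system_forward)
            + beta_q * quadratic_penalty(mu_img)
            + beta_m * mixture_penalty(mu_img))
```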
ABSTRACT
Managing and optimizing radiation dose has become a core problem for the CT community. As a fundamental step toward dose optimization, accurate and computationally efficient dose estimates are crucial. The purpose of this study was to devise a computationally efficient projection-based dose metric. The absorbed energy and the object mass were individually modeled using the projection data. The absorbed energy was estimated from the difference between the entrance (primary) and exit photon intensities. The mass was estimated from the volume under the attenuation profile. The feasibility of the approach was evaluated across phantoms with a broad size range, various kVp settings, and two bowtie filters, using a simulation tool, the Computer Assisted Tomography SIMulator (CATSIM) software. The accuracy of the projection-based dose estimation was validated against Monte Carlo (MC) simulations. The relationship between the projection-based dose metric and the MC dose estimate was evaluated using regression models. The projection-based dose metric showed a strong correlation with the Monte Carlo dose estimates (R² > 0.94). The prediction errors for the projection-based dose metric were all below 15%. This study demonstrated the feasibility of computationally efficient dose estimation requiring only the projection data.
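A toy rendition of the described metric is sketched below: the absorbed energy is approximated from the difference between entrance and exit photon counts, the mass from the area under the attenuation profile, and the result is calibrated against Monte Carlo dose by linear regression. The effective energy, geometry factors, and single-material assumption are our own simplifications, not the paper's model.

```python
import numpy as np

def projection_dose_metric(I0, I_exit, line_integrals, mean_energy_keV=60.0,
                           pixel_area_cm2=0.01, mass_atten_cm2_per_g=0.2):
    """Toy projection-domain dose surrogate: absorbed energy divided by object mass.

    I0, I_exit      : entrance / exit photon counts per detector channel and view
    line_integrals  : -ln(I_exit / I0), the attenuation profile (same shape)
    """
    # Energy deposited ~ (photons removed from the beam) x mean photon energy.
    absorbed_keV = np.sum(I0 - I_exit) * mean_energy_keV

    # Object mass ~ "volume" under the attenuation profile divided by an effective
    # mass attenuation coefficient (single effective material assumed).
    mass_g = np.sum(line_integrals) * pixel_area_cm2 / mass_atten_cm2_per_g

    return absorbed_keV / max(mass_g, 1e-9)   # arbitrary units per unit mass

def calibrate(metric_values, mc_doses):
    """Linear regression of Monte Carlo dose against the projection-based metric."""
    slope, intercept = np.polyfit(metric_values, mc_doses, deg=1)
    return slope, intercept
```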
Subject(s)
Radiation Dosage , Tomography, X-Ray Computed , Computer Simulation , Humans , Monte Carlo Method , Phantoms, Imaging , Radiometry/methods , Software
ABSTRACT
BACKGROUND: We are interested in exploring dedicated, high-performance cardiac CT systems optimized to provide the best tradeoff between system cost, image quality, and radiation dose. OBJECTIVE: We sought to identify and evaluate a broad range of CT architectures that could provide an optimal, dedicated cardiac CT solution. METHODS: We identified and evaluated thirty candidate architectures using consistent design choices. We defined specific evaluation metrics related to cost and performance. We then scored the candidates versus the defined metrics. Lastly, we applied a weighting system to combine scores for all metrics into a single overall score for each architecture. CT experts with backgrounds in cardiovascular radiology, x-ray physics, CT hardware and CT algorithms performed the scoring and weighting. RESULTS: We found nearly a twofold difference between the most and the least promising candidate architectures. Architectures employed by contemporary commercial diagnostic CT systems were among the highest-scoring candidates. We identified six architectures that show sufficient promise to merit further in-depth analysis and comparison. CONCLUSION: Our results suggest that contemporary diagnostic CT system architectures outperform most other candidates that we evaluated, but the results for a few alternatives were relatively close. We selected six representative high-scoring candidates for more detailed design and further comparative evaluation.
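The scoring-and-weighting step reduces to a weighted sum of metric scores per architecture; the snippet below shows that aggregation with made-up metric names, scores, and weights.

```python
import numpy as np

# Hypothetical expert scores (rows: architectures, columns: metrics), scale 1-5.
metrics = ["cost", "image quality", "dose", "temporal resolution"]
scores = np.array([
    [4, 3, 4, 2],   # architecture A
    [2, 5, 3, 5],   # architecture B
    [3, 4, 4, 3],   # architecture C
])
weights = np.array([0.2, 0.4, 0.2, 0.2])     # expert-assigned weights, summing to 1

overall = scores @ weights                    # one combined score per architecture
ranking = np.argsort(overall)[::-1]           # highest-scoring architecture first
print(dict(zip("ABC", overall.round(2))), ranking)
```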
Subject(s)
Cardiac Imaging Techniques/methods , Tomography, X-Ray Computed/methods , Cardiovascular Diseases/diagnostic imaging , Humans
ABSTRACT
A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may exhibit random intensity fluctuations relative to its nominal intensity. While a conventional third-generation CT system uses a reference channel to monitor source intensity fluctuation, each source of an MS-IGCT system illuminates only a small portion of the entire field-of-view (FOV). It is therefore difficult for all sources to illuminate a reference channel, so projection data computed by standard normalization using each source's flat-field data contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, as well as experimental data of an ice-filled PMMA phantom and a rabbit. The effects on image resolution and robustness to noise were assessed using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, reconstructed images from simulated and experimental data show high-frequency and ring artifacts, which are removed effectively by the proposed method. The proposed method also does not degrade image resolution and is very robust to the presence of noise.
Subject(s)
Statistics as Topic , Tomography, X-Ray Computed/methods , Computer Simulation , Phantoms, Imaging , Radiographic Image Interpretation, Computer-Assisted
ABSTRACT
It is well known that CT projections are redundant. Over the past decades, significant efforts have been devoted to characterizing this data redundancy from different perspectives. Very recently, Clackdoyle and Desbat reported a new integral-type data consistency condition (DCC) for truncated 2D parallel-beam projections, which can be applied to a region inside the field of view (FOV) but outside the convex hull of the compact support of an object. Inspired by their work, we derive here a more general condition for 2D fan-beam geometry with a general scanning trajectory. The extended DCC is verified with simulated projections of the Shepp-Logan phantom and a clinically collected sinogram. We then demonstrate an application of the proposed DCC.
Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Artifacts , Computer Simulation , Humans , Phantoms, Imaging
ABSTRACT
Objective. We sought to systematically evaluate CatSim's ability to accurately simulate the spatial resolution produced by a typical 64-detector-row clinical CT scanner in the projection and image domains, over the range of clinically used x-ray techniques. Approach. Using a 64-detector-row clinical scanner, we scanned two phantoms designed to evaluate spatial resolution in the projection and image domains. These empirical scans were performed over the standard clinically used range of x-ray techniques (kV and mA). We extracted projection data from the scanner, and we reconstructed images. For the CatSim simulations, we developed digital phantoms to represent the phantoms used in the empirical scans. We developed a new, realistic model for the x-ray source focal spot, and we empirically tuned a published model for the x-ray detector temporal response. We applied these phantoms and models to simulate scans equivalent to the empirical scans, and we reconstructed the simulated projections using the same methods used for the empirical scans. For the empirical and simulated scans, we qualitatively and quantitatively compared the projection-domain and image-domain point-spread functions (PSFs) as well as the image-domain modulation transfer functions. We reported four quantitative metrics and the percent error between the empirical and simulated results. Main Results. Qualitatively, the PSFs matched well in both the projection and image domains. Quantitatively, all four metrics generally agreed well, with most of the average errors substantially less than 5% for all x-ray techniques. Although the errors tended to increase with decreasing kV, we found that the CatSim simulations agreed with the empirical scans within the limits required for the anticipated applications of CatSim. Significance. The new focal spot model and the new detector temporal response model are significant contributions to CatSim because they enabled achieving the desired level of agreement between empirical and simulated results. With these new models and this validation, CatSim users can be confident that the spatial resolution represented by simulations faithfully represents results that would be obtained by a real scanner, within reasonable, known limits. Furthermore, users of CatSim can vary parameters including but not limited to system geometry, focal spot size/shape, and detector parameters, beyond the values available in physical scanners, and be confident in the results. Therefore, CatSim can be used to explore new hardware designs as well as new scanning and reconstruction methods, thus enabling acceleration of improved CT scan capabilities.
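For reference, one standard way to obtain an MTF is to Fourier transform a sampled line-spread function (LSF) and normalize its zero-frequency value; the sketch below does this in 1D and reads off the 50% and 10% MTF points. The Gaussian LSF and 0.1 mm sampling pitch are placeholders, not the scanner's measured response.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm=0.1):
    """MTF as the normalized magnitude of the FFT of a sampled line-spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                          # unit area
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                             # normalize DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # spatial frequency in lp/mm
    return freqs, mtf

def mtf_crossing(freqs, mtf, level):
    """First frequency at which the MTF falls below a given level (linear interpolation)."""
    idx = int(np.argmax(mtf < level))
    f0, f1, m0, m1 = freqs[idx - 1], freqs[idx], mtf[idx - 1], mtf[idx]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)

# Toy Gaussian LSF sampled at 0.1 mm.
x = np.arange(-64, 64) * 0.1
lsf = np.exp(-0.5 * (x / 0.35) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_mm=0.1)
print(mtf_crossing(freqs, mtf, 0.5), mtf_crossing(freqs, mtf, 0.1))  # MTF50, MTF10 in lp/mm
```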
Subject(s)
Algorithms , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Computer Simulation , Tomography Scanners, X-Ray Computed , Phantoms, Imaging , X-Rays
ABSTRACT
Early diagnosis and accurate prognosis of colorectal cancer are critical for determining optimal treatment plans and maximizing patient outcomes, especially as the disease progresses into liver metastases. Computed tomography (CT) is a frontline tool for this task; however, the preservation of predictive radiomic features is highly dependent on the scanning protocol and reconstruction algorithm. We hypothesized that image reconstruction with a high-frequency kernel could result in a better characterization of liver metastases features via deep neural networks. This kernel produces images that appear noisier but preserve more sinogram information. A simulation pipeline was developed to study the effects of imaging parameters on the ability to characterize the features of liver metastases. This pipeline uses a fractal approach to generate a diverse population of shapes representing virtual metastases and then superimposes them on a realistic CT liver region to perform a virtual CT scan using CatSim. Datasets of 10,000 liver metastases were generated, scanned, and reconstructed using either standard or high-frequency kernels. These data were used to train and validate deep neural networks to recover crafted metastases characteristics, such as internal heterogeneity, edge sharpness, and edge fractal dimension. In the absence of noise, models scored, on average, 12.2% (α = 0.012) and 7.5% (α = 0.049) lower squared error for characterizing edge sharpness and fractal dimension, respectively, when using high-frequency reconstructions compared to standard ones. However, the differences in performance were not statistically significant when a typical level of CT noise was simulated in the clinical scan. Our results suggest that high-frequency reconstruction kernels can better preserve information for downstream artificial intelligence-based radiomic characterization, provided that noise is limited. Future work should investigate these information-preserving kernels in datasets with clinical labels.
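One of the crafted characteristics, the edge fractal dimension, can be estimated from a binary lesion mask by box counting over its boundary; the generic implementation below illustrates the idea and is not the paper's pipeline.

```python
import numpy as np

def boundary_mask(mask):
    """Boundary pixels: object pixels with at least one 4-connected background neighbor."""
    m = mask.astype(bool)
    interior = m.copy()
    interior[1:-1, 1:-1] &= m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
    return m & ~interior

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of the object's boundary via box counting."""
    edge = boundary_mask(mask)
    counts = []
    for s in box_sizes:
        h, w = edge.shape
        # Count boxes of size s x s that contain at least one boundary pixel.
        trimmed = edge[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    # Slope of log(count) versus log(1/s) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: a filled circle (smooth boundary, expected dimension near 1).
yy, xx = np.mgrid[:256, :256]
circle = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
print(box_counting_dimension(circle))
```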
ABSTRACT
BACKGROUND: Cardiac computed tomography (CT) exams are some of the most complex CT exams due to the need to carefully time the scan when the heart chambers are near the peak contrast concentration. With current "bolus tracking" and "timing bolus" techniques, after contrast medium is injected, a target vessel or chamber is scanned periodically, and images are reconstructed to monitor the opacification. Both techniques have opportunities for improvement, such as reducing the contrast medium volume, the exam time, the number of manual steps, and improving the robustness of correctly timing the peak opacification. PURPOSE: The objective of our study is to (1) develop a novel autonomous cardiac CT clinical workflow to track contrast bolus dynamics directly from pulsed x-ray projections, (2) develop a new five-dimensional virtual cardiac CT data generation tool with programmable cardiac profiles and bolus dynamics, and (3) demonstrate the feasibility of projection-domain prospective bolus tracking using a neural network trained and tested with the virtual data to find the contrast peak. METHODS: In our proposed workflow, pulsed mode projections (PMPs) are acquired with a wide-open collimator under sparse view conditions (monitoring phase). Each time a new PMP is acquired, the neural network is used to estimate the contrast enhancement inside the target chambers. To train such a network, we introduce a new approach to generate clinically realistic virtual scan data based on a five-dimensional cardiac model, by synthesizing user-defined contrast bolus dynamics and patient electrocardiogram profiles. In this study, we investigated a scenario with one single PMP per rotation. To find the optimal PMP view angle, 20 angles were explored. For each angle, 300 virtual exams were generated from 115 human subject datasets and divided into training, validation, and testing groups. Twenty neural networks were trained and evaluated in total to find the optimal network. Finally, a simple bolus peak time estimation algorithm was developed and evaluated by comparing to the ground truth bolus peak time. RESULTS: To evaluate the accuracy of a bolus time-intensity curve estimated by the network, the cosine similarity between the estimation and the ground truth was computed. The cosine similarity was larger than 0.97 for all projection angles. A view angle corresponding to the x-ray tube at 30 degrees from vertical (left-anterior of subject) showed the lowest errors. The amplitude of the estimated bolus curves (in Hounsfield Units) was not always correctly predicted, but the shape was accurately predicted. This resulted in an RMSE of 1.23 s for the left chambers and 0.78 s for the right chambers in the contrast peak time estimation. CONCLUSION: In this study, we proposed an innovative real-time way to predict the contrast bolus peak in cardiac CT as well as an innovative approach to train a neural network using virtual but clinically realistic data. Our trained network successfully estimated the shape of the time-intensity curve for the target chambers, which led to accurate bolus peak time estimation. This technique could be used for autonomous diagnostic cardiac CT to trigger a diagnostic scan for optimal contrast enhancement.
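The evaluation reduces to a cosine similarity between estimated and ground-truth time-intensity curves plus a peak-time comparison; a minimal version is below. The light smoothing before the argmax is our own illustrative choice, not necessarily the peak-estimation algorithm used in the study.

```python
import numpy as np

def cosine_similarity(est, ref):
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(est @ ref / (np.linalg.norm(est) * np.linalg.norm(ref) + 1e-12))

def bolus_peak_time(curve, t, smooth_win=3):
    """Peak time of a time-intensity curve after a small moving-average smoothing."""
    kernel = np.ones(smooth_win) / smooth_win
    smoothed = np.convolve(curve, kernel, mode="same")
    return t[int(np.argmax(smoothed))]

# Toy curves sampled once per gantry rotation (e.g., every 0.28 s).
t = np.arange(0, 30, 0.28)
truth = 400.0 * np.exp(-0.5 * ((t - 14.0) / 3.0) ** 2)             # HU enhancement
estimate = 0.8 * 400.0 * np.exp(-0.5 * ((t - 14.3) / 3.2) ** 2)    # right shape, wrong amplitude

print(cosine_similarity(estimate, truth))                           # close to 1
print(abs(bolus_peak_time(estimate, t) - bolus_peak_time(truth, t)))  # peak-time error (s)
```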
ABSTRACT
BACKGROUND: Recent photon-counting computed tomography (PCCT) development brings great opportunities for plaque characterization with much-improved spatial resolution and spectral imaging capability. While existing coronary plaque PCCT imaging results are based on CZT- or CdTe-based detectors, deep-silicon photon-counting detectors offer unique performance characteristics and promise distinct imaging capabilities. PURPOSE: This study aims to numerically investigate the feasibility of characterizing plaques with a deep-silicon PCCT scanner and to demonstrate its potential performance advantages over traditional CT scanners using energy-integrating detectors (EID). METHODS: We conducted a systematic simulation study of a deep-silicon PCCT scanner using a newly developed digital plaque phantom with clinically relevant geometrical and chemical properties. Through qualitative and quantitative evaluations, this study investigates the effects of spatial resolution, noise, and motion artifacts on plaque imaging. RESULTS: Noise-free simulations indicated that PCCT imaging could delineate the boundary of necrotic cores with a much finer resolution than EID-CT imaging, achieving a structural similarity index metric (SSIM) score of 0.970 and reducing the root mean squared error (RMSE) by two-thirds. Errors in the measured necrotic core area were reduced from 91.5% to 24%, and errors in fibrous cap thickness were reduced from 349.8% to 33.3%. In the presence of noise, the optimal reconstruction was achieved using 0.25 mm voxels and a soft reconstruction kernel, yielding the highest contrast-to-noise ratio (CNR) of 3.48 for necrotic core detection and the best image quality metrics among all choices. However, the ultrahigh resolution of PCCT increased sensitivity to motion artifacts, which could be mitigated by keeping the residual motion amplitude below 0.4 mm. CONCLUSIONS: The findings suggest that a deep-silicon PCCT scanner can offer sufficient spatial resolution and tissue contrast for effective plaque characterization, potentially improving diagnostic accuracy in cardiovascular imaging, provided image noise and motion blur can be mitigated using advanced algorithms. This simulation study involves several simplifications, which may result in some idealized outcomes that do not directly translate to clinical practice. Further validation studies with physical scans are necessary and will be considered for future work.
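The reported figures of merit are standard; for reference, CNR and RMSE can be computed from ROI masks as below (the SSIM score would come from a standard implementation and is not re-derived here). The HU values and noise level in the toy example are arbitrary.

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a target ROI (e.g., necrotic core) and background."""
    roi, bg = image[roi_mask], image[bg_mask]
    return abs(roi.mean() - bg.mean()) / (bg.std() + 1e-12)

def rmse(image, reference):
    """Root mean squared error against a reference (e.g., the noise-free phantom)."""
    diff = np.asarray(image, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy example: a noisy disk ("necrotic core") inside a uniform background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
core = (xx - 64) ** 2 + (yy - 64) ** 2 < 20 ** 2
truth = np.where(core, -30.0, 60.0)                  # HU values, illustrative
noisy = truth + rng.normal(0.0, 25.0, truth.shape)
print(cnr(noisy, core, ~core), rmse(noisy, truth))
```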
ABSTRACT
The presence of metal objects leads to corrupted CT projection measurements, resulting in metal artifacts in the reconstructed CT images. AI promises to offer improved solutions to estimate missing sinogram data for metal artifact reduction (MAR), as previously shown with convolutional neural networks (CNNs) and generative adversarial networks (GANs). Recently, denoising diffusion probabilistic models (DDPM) have shown great promise in image generation tasks, potentially outperforming GANs. In this study, a DDPM-based approach is proposed for inpainting of missing sinogram data for improved MAR. The proposed model is unconditionally trained, free from information on metal objects, which can potentially enhance its generalization capabilities across different types of metal implants compared to conditionally trained approaches. The performance of the proposed technique was evaluated and compared to the state-of-the-art normalized MAR (NMAR) approach as well as to CNN-based and GAN-based MAR approaches. The DDPM-based approach provided significantly higher SSIM and PSNR, as compared to NMAR (SSIM: p [Formula: see text]; PSNR: p [Formula: see text]), the CNN (SSIM: p [Formula: see text]; PSNR: p [Formula: see text]) and the GAN (SSIM: p [Formula: see text]; PSNR: p <0.05) methods. The DDPM-MAR technique was further evaluated based on clinically relevant image quality metrics on clinical CT images with virtually introduced metal objects and metal artifacts, demonstrating superior quality relative to the other three models. In general, the AI-based techniques showed improved MAR performance compared to the non-AI-based NMAR approach. The proposed methodology shows promise in enhancing the effectiveness of MAR, and therefore improving the diagnostic accuracy of CT.
Subject(s)
Algorithms , Artifacts , Metals , Models, Statistical , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Neural Networks, Computer
ABSTRACT
PURPOSE: In this report, the authors introduce the general concept of the completeness map as a means to evaluate the completeness of data acquired by a given CT system design (architecture and scan mode). They illustrate its utility by applying the completeness map concept to a number of candidate CT system designs, as part of a study to advance the state-of-the-art in cardiac CT. METHODS: In order to optimally reconstruct a point within a volume of interest (VOI), the Radon transform on all possible planes through that point should be measured. The authors quantified the extent to which this ideal condition is satisfied for the entire image volume. They first determined a Radon completeness number for each point in the VOI, defined as the percentage of possible planes that is actually measured. A completeness map is then defined as a 3D matrix of the completeness numbers for the entire VOI. The authors proposed algorithms to analyze the projection datasets in Radon space and compute the completeness number for a fixed point, and applied these algorithms to the various architectures and scan modes under evaluation. In this report, the authors consider four selected candidate architectures, operating with different scan modes, for a total of five system design alternatives. Each of these alternatives is evaluated using the completeness map. RESULTS: If the detector size and cone angle are large enough to cover the entire cardiac VOI, a single-source circular scan can have ≥99% completeness over the entire VOI. However, only the central z-slice can be exactly reconstructed, which corresponds to 100% completeness. For a typical single-source architecture, if the detector is limited to an axial dimension of 40 mm, a helical scan needs about five rotations to form an exact reconstruction region covering the cardiac VOI, while a triple-source helical scan only requires two rotations, leading to a 2.5x improvement in temporal resolution. If the source and detector of an inverse-geometry CT (IGCT) system have the same axial extent, and the spacing of source points in the axial and transaxial directions is sufficiently small, the IGCT system can also form an exact reconstruction region for the cardiac VOI. If the VOI can be covered by the x-ray beam in any view, a composite-circling scan can generate an exact reconstruction region covering the VOI. CONCLUSIONS: The completeness map evaluation provides useful information for selecting the next-generation cardiac CT system design. The proposed completeness map method provides a practical tool for analyzing complex scanning trajectories, where the theoretical image quality for some complex system designs cannot be predicted without reconstruction algorithms that have yet to be developed.
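As a concrete illustration of the completeness number, the snippet below estimates, for one point and a circular source trajectory, the fraction of plane orientations through that point that intersect the trajectory (a Tuy-style condition). Detector truncation is ignored, so this is only a simplified stand-in for the authors' Radon-space algorithm.

```python
import numpy as np

def completeness_number_circular(point, radius, n_dirs=20000, seed=0):
    """Fraction of planes through `point` that intersect a circular source trajectory
    of the given radius in the z = 0 plane (detector truncation ignored)."""
    rng = np.random.default_rng(seed)
    # Uniformly sampled unit normals on the sphere.
    n = rng.normal(size=(n_dirs, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Plane through `point` with normal n: {x : n.(x - point) = 0}. A source at
    # (R cos t, R sin t, 0) lies on it iff R*(nx cos t + ny sin t) = n.point,
    # which is solvable iff |n.point| <= R * sqrt(nx^2 + ny^2).
    rhs = np.abs(n @ np.asarray(point, float))
    lhs = radius * np.linalg.norm(n[:, :2], axis=1)
    return float(np.mean(rhs <= lhs))

# Points on the central plane evaluate to 100% completeness; off-plane points fall short.
print(completeness_number_circular([50.0, 0.0, 0.0], radius=600.0))   # ~1.0
print(completeness_number_circular([50.0, 0.0, 40.0], radius=600.0))  # < 1.0
```

Consistent with the results above, an in-plane point reaches 100% while off-plane points remain below it for a circular scan.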
Subject(s)
Heart/diagnostic imaging , Tomography, X-Ray Computed/methods , Equipment Design , Tomography, X-Ray Computed/instrumentation
ABSTRACT
Deep learning (DL) has shown unprecedented performance for many image analysis and image enhancement tasks. Yet, solving large-scale inverse problems like tomographic reconstruction remains challenging for DL. These problems involve non-local and space-variant integral transforms between the input and output domains, for which no efficient neural network models are readily available. A prior attempt to solve tomographic reconstruction problems with supervised learning relied on a brute-force fully connected network and only allowed reconstruction with a 128⁴ system matrix size. This cannot practically scale to realistic data sizes such as 512⁴ and 512⁶ for three-dimensional datasets. Here we present a novel framework to solve such problems with DL by casting the original problem as a continuum of intermediate representations between the input and output domains. The original problem is broken down into a sequence of simpler transformations that can be well mapped onto an efficient hierarchical network architecture, with exponentially fewer parameters than a fully connected network would need. We applied the approach to computed tomography (CT) image reconstruction for a 512⁴ system matrix size. This work introduces a new kind of data-driven DL solver for full-size CT reconstruction without relying on the structure of direct (analytical) or iterative (numerical) inversion techniques. This work presents a feasibility demonstration of full-scale learnt reconstruction, whereas more developments will be needed to demonstrate superiority relative to traditional reconstruction approaches. The proposed approach is also extendable to other imaging problems such as emission and magnetic resonance reconstruction. More broadly, hierarchical DL opens the door to a new class of solvers for general inverse problems, which could potentially lead to improved signal-to-noise ratio, spatial resolution and computational efficiency in various areas.
ABSTRACT
Objective. X-ray-based imaging modalities including mammography and computed tomography (CT) are widely used in cancer screening, diagnosis, staging, treatment planning, and therapy response monitoring. Over the past few decades, improvements to these modalities have resulted in substantially improved efficacy and efficiency, and substantially reduced radiation dose and cost. However, such improvements have evolved more slowly than would be ideal because lengthy preclinical and clinical evaluation is required. In many cases, new ideas cannot be evaluated due to the high cost of fabricating and testing prototypes. Wider availability of computer simulation tools could accelerate development of new imaging technologies. This paper introduces the development of a new open-access simulation environment for x-ray-based imaging. The main motivation of this work is to publicly distribute a fast but accurate ray-tracing x-ray and CT simulation tool along with realistic phantoms and 3D reconstruction capability, building on decades of developments in industry and academia. Approach. The x-ray-based Cancer Imaging Simulation Toolkit (XCIST) is developed in the context of cancer imaging, but can more broadly be applied. XCIST is physics-based, written in Python and C/C++, and currently consists of three major subsets: digital phantoms, the simulator itself (CatSim), and image reconstruction algorithms; planned future features include a fast dose-estimation tool and rigorous validation. To enable broad usage and to model and evaluate new technologies, XCIST is easily extendable by other researchers. To demonstrate XCIST's ability to produce realistic images and to show the benefits of using XCIST for insight into the impact of separate physics effects on image quality, we present exemplary simulations by varying contributing factors such as noise and sampling. Main results. The capabilities and flexibility of XCIST are demonstrated, showing easy applicability to specific simulation problems. Geometric and x-ray attenuation accuracy are shown, as well as XCIST's ability to model multiple scanner and protocol parameters, and to attribute fundamental image quality characteristics to specific parameters. Significance. This work represents an important first step toward the goal of creating an open-access platform for simulating existing and emerging x-ray-based imaging systems. While numerous simulation tools exist, we believe the combined XCIST toolset provides a unique advantage in terms of modeling capabilities versus ease of use and compute time. We publicly share this toolset to provide an environment for scientists to accelerate and improve the relevance of their research in x-ray and CT.
Subject(s)
Access to Information , Tomography, X-Ray Computed , Algorithms , Computer Simulation , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Tomography, X-Ray Computed/methods , X-Rays
ABSTRACT
This review paper aims to summarize cardiac CT blooming artifacts, how they present clinically, and what their root causes and potential solutions are. A literature survey was performed covering publications with a specific interest in calcium blooming and stent blooming in cardiac CT. The claims from the literature were compared and interpreted, aiming to narrow down the root causes and most promising solutions for blooming artifacts. More than 30 journal publications were identified with specific relevance to blooming artifacts. The main reported causes of blooming artifacts are the partial volume effect, motion artifacts, and beam hardening. The proposed solutions are classified as high-resolution CT hardware, high-resolution CT reconstruction, subtraction techniques, and post-processing techniques, with a special emphasis on deep learning (DL) techniques. The partial volume effect is the leading cause of blooming artifacts. It can be minimized by increasing the CT spatial resolution through higher-resolution CT hardware or advanced high-resolution CT reconstruction. In addition, DL techniques have shown great promise in correcting blooming artifacts. A combination of these techniques could avoid the repeat scans required by subtraction techniques.
ABSTRACT
X-ray computed tomography (CT) is a nondestructive imaging technique that reconstructs cross-sectional images of an object from x-ray measurements taken at different view angles, for medical diagnosis, therapeutic planning, security screening, and other applications. In clinical practice, the x-ray tube emits polychromatic x-rays, and the x-ray detector array operates in the energy-integrating mode to acquire energy intensity. This physical process of x-ray imaging is accurately described by an energy-dependent non-linear integral equation based on the Beer-Lambert law. However, the non-linear model does not admit a computationally efficient inversion and is often approximated by a linear integral model in the form of the Radon transform, which discards the energy-dependent information. This approximate model produces inaccurate quantification of attenuation images and suffers from beam-hardening effects. In this paper, a machine learning-based approach is proposed to correct the model mismatch and achieve quantitative CT imaging. Specifically, a one-dimensional network model is proposed to learn a non-linear transform from a training dataset that maps a polychromatic CT image to its monochromatic sinogram at a pre-specified energy level, realizing virtual monochromatic (VM) imaging effectively and efficiently. Our results show that the proposed method recovers high-quality monochromatic projections with an average relative error of less than 2%. The resultant x-ray VM imaging can be applied for beam-hardening correction, material differentiation and tissue characterization, and proton therapy treatment planning.
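The architectural details of the one-dimensional network are not given in the abstract, so the PyTorch sketch below is only a generic small per-sample MLP standing in for such a 1D model; the layer sizes, the single-input/single-output convention, and the placeholder training data are assumptions.

```python
import torch
import torch.nn as nn

class OneDMapper(nn.Module):
    """Small MLP mapping a polychromatic value (per ray/pixel) to a
    virtual-monochromatic line integral at a pre-specified energy."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):          # x: (N, 1) polychromatic values
        return self.net(x)

# Supervised training against paired monochromatic targets (synthetic stand-ins here).
model = OneDMapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

poly = torch.rand(256, 1) * 5.0                 # placeholder polychromatic data
mono_target = 1.1 * poly - 0.05 * poly ** 2     # placeholder nonlinear "beam-hardening" relation

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(poly), mono_target)
    loss.backward()
    opt.step()
```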
ABSTRACT
PURPOSE: To develop a tool to produce accurate, well-validated x-ray spectra for standalone use or for use in an open-access x-ray/CT simulation tool. Spectrum models were developed for tube voltages in the range of 80 kVp through 140 kVp and for anode takeoff angles in the range of 5° to 9°. METHODS: Spectra were initialized based on physics models and then refined using empirical measurements, as follows. A new spectrum-parameterization method was developed, including 13 spline knots to represent the bremsstrahlung component and 4 values to represent characteristic lines. Initial spectra at 80, 100, 120, and 140 kVp and at takeoff angles from 5° to 9° were produced using the physics-based spectrum estimation tools XSPECT and SpekPy. Empirical experiments were systematically designed with careful selection of attenuator materials and thicknesses, and by reducing measurement contamination from scatter to <1%. Measurements were made on a 64-row CT scanner using the scanner's detector and multiple layers of polymethylmethacrylate (PMMA), aluminum, titanium, tin, and neodymium. Measurements were made at 80, 100, 120, and 140 kVp and covered the entire 64-row detector (takeoff angles from 5° to 9°); a total of 6,144 unique measurements were made. After accounting for the detector's energy response, parameterized representations of the initial spectra were refined for best agreement with measurements using two proposed optimization schemes: one based on modulation and one based on gradient descent. X-ray transmission errors were computed for measurements versus calculations using the non-optimized and optimized spectra. Half-value, tenth-value, and hundredth-value layers for PMMA, Al, and Ti were calculated. RESULTS: Spectra before and after parameterization were in excellent agreement (e.g., R² values of 0.995 and 0.997). Empirical measurements produced smoothly varying curves with x-ray transmission covering a range of up to 3.5 orders of magnitude. Spectra from the two optimization schemes, compared with the unoptimized physics-based spectra, each improved agreement with measurements by twofold through tenfold, for both post-log transmission data and fractional value layers. CONCLUSION: The resulting well-validated spectra are appropriate for use in the open-access x-ray/CT simulator under development, the x-ray-based Cancer Imaging Toolkit (XCIST), or for standalone use. These spectra can be readily interpolated to produce spectra at arbitrary kVp over the range of 80 to 140 kVp and arbitrary takeoff angles over the range of 5° to 9°. Furthermore, interpolated spectra over these ranges can be obtained by applying the standalone Matlab function available at https://github.com/xcist/documentation/blob/master/XCISTspectrum.m.
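A compact sketch of the parameterization idea: a smooth bremsstrahlung continuum defined by 13 spline-knot amplitudes plus four characteristic-line amplitudes at nominal tungsten K-line energies, refined by least squares so that calculated transmissions match measured ones. The knot placement, the toy aluminum attenuation model, and the handful of synthetic "measurements" below are invented for illustration; the actual fit described above uses thousands of measurements across several materials.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

E = np.arange(20, 141, 1.0)                       # keV grid
knot_E = np.linspace(20, 140, 13)                 # 13 spline knots (bremsstrahlung)
char_E = np.array([57.98, 59.32, 67.24, 69.07])   # nominal tungsten K-line energies (keV)

def build_spectrum(params):
    """Spectrum = spline through 13 knot amplitudes + 4 characteristic-line amplitudes."""
    knot_amp, line_amp = params[:13], params[13:]
    spec = np.clip(CubicSpline(knot_E, knot_amp)(E), 0.0, None)
    for e0, a in zip(char_E, line_amp):
        spec[np.argmin(np.abs(E - e0))] += max(a, 0.0)
    return spec

def transmission(spec, mu, thicknesses):
    """Calculated transmission through an attenuator for a list of thicknesses (cm)."""
    spec = spec / spec.sum()
    return np.array([np.sum(spec * np.exp(-mu * t)) for t in thicknesses])

# Toy attenuator model and synthetic "measured" transmissions (placeholders, not real data).
mu_al = 1.2 * (30.0 / E) ** 2.5 + 0.05            # crude Al-like energy dependence, 1/cm
thick = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
measured = transmission(build_spectrum(np.r_[np.hanning(13), 0.3, 0.5, 0.1, 0.05]),
                        mu_al, thick)

def residuals(params):
    # In practice many more measurements (materials, thicknesses, angles) constrain the fit.
    return transmission(build_spectrum(params), mu_al, thick) - measured

x0 = np.r_[np.ones(13), 0.1 * np.ones(4)]         # initial (e.g., physics-based) spectrum
fit = least_squares(residuals, x0, bounds=(0.0, np.inf))
refined_spectrum = build_spectrum(fit.x)
```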
Subject(s)
Models, Theoretical , Tomography, X-Ray Computed , Computer Simulation , Tomography Scanners, X-Ray Computed , X-Rays
ABSTRACT
Conventional single-spectrum computed tomography (CT) reconstructs a spectrally integrated attenuation image and reveals tissue morphology without any information about the elemental composition of the tissues. Dual-energy CT (DECT) acquires two spectrally distinct datasets and reconstructs energy-selective (virtual monoenergetic [VM]) and material-selective (material decomposition) images. However, DECT increases system complexity and radiation dose compared with single-spectrum CT. In this paper, a deep learning approach is presented to produce VM images from single-spectrum CT images. Specifically, a modified residual neural network (ResNet) model is developed to map single-spectrum CT images to VM images at pre-specified energy levels. This network is trained on clinical DECT data and shows excellent convergence behavior and image accuracy compared with VM images produced by DECT. The trained model produces high-quality approximations of VM images with a relative error of less than 2%. This method enables multi-material decomposition into three tissue classes with accuracy comparable to that of DECT.
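The modified ResNet is not detailed in the abstract, so the sketch below is only a generic small residual image-to-image network of the kind such a mapping could use; the depth, channel width, global residual connection, and one-model-per-energy convention are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)      # residual connection

class VMNet(nn.Module):
    """Maps a single-spectrum CT image to a virtual monoenergetic image."""
    def __init__(self, n_blocks: int = 8, ch: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        # Global residual: the network learns the (small) correction to the input image.
        return x + self.tail(self.blocks(self.head(x)))

vm70 = VMNet()                              # e.g., one model per target energy (assumption)
out = vm70(torch.randn(1, 1, 256, 256))     # (batch, channel, H, W)
```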