ABSTRACT
Background Photon-counting CT (PCCT) represents a recent advancement in CT, offering improved spatial resolution and spectral separability. By using multiple adjustable energy bins, PCCT enables K-edge imaging, allowing distinction of mixed contrast agents. Deep silicon is a new type of photon-counting detector with characteristics that differ from those of cadmium-based photon-counting detectors. Purpose To evaluate the performance of a prototype deep-Si PCCT scanner and compare it with that of a state-of-the-art dual-energy energy-integrating detector (EID) scanner in imaging coronary artery plaques enhanced with iodine and K-edge contrast agents. Materials and Methods A series of 10 three-dimensional-printed inserts (diameter, 3.5 mm) was prepared, and materials mimicking soft and calcified plaques were added to simulate stenosed coronary arteries. Inserts filled with an iodine- or gadolinium-based contrast agent (GBCA) were scanned. Virtual monoenergetic images (VMIs) and iodine maps were generated using two- and eight-energy-bin data from EID CT and PCCT, respectively. Gadolinium maps were calculated for PCCT. The CT numbers of VMIs and iodine maps were compared. Spatial resolution and blooming artifacts were compared on the 70-keV VMIs in plaque-free and calcified coronary arteries. Results No evidence of a significant difference in the CT number of 70-keV images was found except in inserts containing GBCAs. In the absence of a GBCA, excellent (r > 0.99) agreement for iodine was found. PCCT quantified the GBCA to within 0.2 mg Gd/mL ± 0.8 of the ground truth, whereas EID CT failed to detect the GBCA. Lumen measurements were more accurate for PCCT than for EID CT, with mean errors of 167 versus 442 µm (P < .001) compared with the 3.5-mm ground truth. Conclusion Deep-Si PCCT demonstrated good accuracy in iodine quantification and could accurately decompose mixtures of two contrast agents.
Its improved spatial resolution resulted in sharper images with blooming artifacts reduced by 50% compared with a state-of-the-art dual-energy EID CT scanner. © RSNA, 2024.
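The two-material decomposition behind the iodine and gadolinium maps described above can be illustrated with a minimal least-squares sketch. The per-bin attenuation coefficients below are made-up placeholders, not calibrated scanner values, and a real system works from noisy log-normalized bin counts rather than exact line integrals:

```python
import numpy as np

# Hypothetical per-bin mass attenuation coefficients (cm^2/g) for iodine
# and gadolinium in 8 energy bins; real values depend on the bin edges
# and the incident spectrum.
A = np.array([
    [35.0, 20.0],   # bin 1: (iodine, gadolinium)
    [28.0, 16.0],
    [22.0, 13.0],
    [17.0, 11.0],
    [13.0, 25.0],   # Gd K-edge (50.2 keV) boosts gadolinium attenuation here
    [10.0, 20.0],
    [8.0,  16.0],
    [6.5,  13.0],
])

def decompose(line_integrals):
    """Least-squares estimate of (iodine, gadolinium) area densities
    (g/cm^2) from per-bin attenuation line integrals."""
    coef, *_ = np.linalg.lstsq(A, line_integrals, rcond=None)
    return coef

# Simulate a noise-free mixture: 0.01 g/cm^2 iodine + 0.02 g/cm^2 gadolinium.
truth = np.array([0.01, 0.02])
est = decompose(A @ truth)
print(np.round(est, 4))  # → [0.01 0.02]
```

The jump in gadolinium attenuation at its K-edge (the fifth row) is what makes the two agents separable; with only two EID energy channels and no bin adjacent to the K-edge, the same system of equations becomes ill-conditioned.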
Subject(s)
Contrast Media; Phantoms, Imaging; Photons; Humans; Tomography, X-Ray Computed/methods; Coronary Vessels/diagnostic imaging; Silicon; Equipment Design
ABSTRACT
BACKGROUND. Variable beam hardening based on patient size causes variation in CT numbers for energy-integrating detector (EID) CT. Photon-counting detector (PCD) CT more accurately determines effective beam energy, potentially improving CT number reliability. OBJECTIVE. The purpose of the present study was to compare EID CT and deep silicon PCD CT in terms of both the effect of changes in object size on CT number and the overall accuracy of CT numbers. METHODS. A phantom with polyethylene rings of varying sizes (mimicking patient sizes) as well as inserts of different materials was scanned on an EID CT scanner in single-energy (SE) mode (120-kV images) and in rapid-kilovoltage-switching dual-energy (DE) mode (70-keV images) and on a prototype deep silicon PCD CT scanner (70-keV images). ROIs were placed to measure the CT numbers of the materials. Slopes of CT number as a function of object size were computed. Materials' ideal CT number at 70 keV was computed using the National Institute of Standards and Technology XCOM Photon Cross Sections Database. The root mean square error (RMSE) between measured and ideal numbers was calculated across object sizes. RESULTS. Slope (expressed as Hounsfield units per centimeter) was significantly closer to zero (i.e., less variation in CT number as a function of size) for PCD CT than for SE EID CT for air (1.2 vs 2.4 HU/cm), water (-0.3 vs -1.0 HU/cm), iodine (-1.1 vs -4.5 HU/cm), and bone (-2.5 vs -10.1 HU/cm) and for PCD CT than for DE EID CT for air (1.2 vs 2.8 HU/cm), water (-0.3 vs -1.0 HU/cm), polystyrene (-0.2 vs -0.9 HU/cm), iodine (-1.1 vs -1.9 HU/cm), and bone (-2.5 vs -6.2 HU/cm) (p < .05). 
For all tested materials, PCD CT had the smallest RMSE, indicating CT numbers closest to ideal numbers; specifically, RMSE (expressed as Hounsfield units) for SE EID CT, DE EID CT, and PCD CT was 32, 44, and 17 HU for air; 7, 8, and 3 HU for water; 9, 10, and 4 HU for polystyrene; 31, 37, and 13 HU for iodine; and 69, 81, and 20 HU for bone, respectively. CONCLUSION. For numerous materials, deep silicon PCD CT, in comparison with SE EID CT and DE EID CT, showed lower CT number variability as a function of size and CT numbers closer to ideal numbers. CLINICAL IMPACT. Greater reliability of CT numbers for PCD CT is important given the dependence of diagnostic pathways on CT numbers.
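The two summary statistics reported above, the slope of CT number versus object size and the RMSE against the ideal value, reduce to a few lines of fitting code. The measurements below are invented for illustration and do not reproduce the study's values:

```python
import numpy as np

def size_slope(sizes_cm, hu_values):
    """Slope of CT number vs. object size (HU/cm) via a least-squares line fit."""
    slope, _ = np.polyfit(sizes_cm, hu_values, 1)
    return slope

def rmse(measured_hu, ideal_hu):
    """Root mean square error between measured and ideal CT numbers."""
    d = np.asarray(measured_hu, float) - ideal_hu
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical iodine-insert measurements across polyethylene ring sizes (cm):
sizes = np.array([15, 20, 25, 30, 35, 40])
hu = np.array([300, 295, 289, 284, 278, 273])  # made-up HU values
print(round(size_slope(sizes, hu), 2))  # negative: HU falls as size grows
print(round(rmse(hu, 290.0), 1))        # spread around an assumed ideal of 290 HU
```

A slope near zero indicates size-independent CT numbers, which is the property the study attributes to PCD CT.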
Subject(s)
Iodine; Silicon; Humans; Reproducibility of Results; Polystyrenes; Tomography, X-Ray Computed/methods; Phantoms, Imaging; Photons; Water
ABSTRACT
Managing and optimizing radiation dose has become a core problem for the CT community. As a fundamental step toward dose optimization, accurate and computationally efficient dose estimates are crucial. The purpose of this study was to devise a computationally efficient projection-based dose metric. The absorbed energy and object mass were individually modeled using the projection data. The absorbed energy was estimated using the difference between the intensity of the primary photons and the exit photons. The mass was estimated using the volume under the attenuation profile. The feasibility of the approach was evaluated across phantoms with a broad size range, various kVp settings, and two bowtie filters, using a simulation tool, the Computer Assisted Tomography SIMulator (CATSIM) software. The accuracy of projection-based dose estimation was validated against Monte Carlo (MC) simulations. The relationship between the projection-based dose metric and the MC dose estimate was evaluated using regression models. The projection-based dose metric showed a strong correlation with Monte Carlo dose estimates (R² > 0.94). The prediction errors for the projection-based dose metric were all below 15%. This study demonstrated the feasibility of computationally efficient dose estimation requiring only the projection data.
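The two ingredients of the metric, absorbed energy from the entrance/exit intensity difference and mass from the area under the attenuation profile, can be sketched as below. The intensities and the single water-like mass attenuation coefficient are illustrative assumptions, not the study's calibrated model:

```python
import numpy as np

def dose_metric(I0, I, pixel_area_cm2, mu_over_rho=0.2):
    """Projection-based dose surrogate: absorbed energy divided by object mass.

    I0 : entrance (unattenuated) intensity per detector pixel
    I  : exit intensity per pixel after the object
    mu_over_rho : assumed effective mass attenuation coefficient (cm^2/g)
    """
    absorbed = np.sum(I0 - I)                # energy removed from the beam
    mu_t = np.log(I0 / I)                    # Beer-Lambert line integrals of mu
    # Object mass is proportional to the volume under the attenuation profile:
    mass = np.sum(mu_t) * pixel_area_cm2 / mu_over_rho
    return absorbed / mass                   # energy per unit mass ~ dose

I0 = np.full(8, 1000.0)
I = I0 * np.exp(-np.array([0.5, 1.0, 1.5, 2.0, 2.0, 1.5, 1.0, 0.5]))
print(dose_metric(I0, I, pixel_area_cm2=1.0) > 0)  # → True
```

In the study this surrogate is then mapped to an MC dose estimate through a fitted regression model rather than used as an absolute dose.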
Subject(s)
Radiation Dosage; Tomography, X-Ray Computed; Computer Simulation; Humans; Monte Carlo Method; Phantoms, Imaging; Radiometry/methods; Software
ABSTRACT
BACKGROUND: We are interested in exploring dedicated, high-performance cardiac CT systems optimized to provide the best tradeoff between system cost, image quality, and radiation dose. OBJECTIVE: We sought to identify and evaluate a broad range of CT architectures that could provide an optimal, dedicated cardiac CT solution. METHODS: We identified and evaluated thirty candidate architectures using consistent design choices. We defined specific evaluation metrics related to cost and performance. We then scored the candidates versus the defined metrics. Lastly, we applied a weighting system to combine scores for all metrics into a single overall score for each architecture. CT experts with backgrounds in cardiovascular radiology, x-ray physics, CT hardware and CT algorithms performed the scoring and weighting. RESULTS: We found nearly a twofold difference between the most and the least promising candidate architectures. Architectures employed by contemporary commercial diagnostic CT systems were among the highest-scoring candidates. We identified six architectures that show sufficient promise to merit further in-depth analysis and comparison. CONCLUSION: Our results suggest that contemporary diagnostic CT system architectures outperform most other candidates that we evaluated, but the results for a few alternatives were relatively close. We selected six representative high-scoring candidates for more detailed design and further comparative evaluation.
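The scoring scheme, per-metric scores combined through a weight set into one overall score per architecture, amounts to a normalized weighted average. The metric names, weights, and scores below are placeholders, not the expert panel's actual values:

```python
def overall_score(scores, weights):
    """Combine per-metric scores into a single weighted architecture score."""
    total_w = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total_w

# Illustrative weights and one candidate's scores (0-10 scale, hypothetical):
weights = {"cost": 2.0, "image_quality": 3.0, "dose": 2.0}
candidate = {"cost": 7.0, "image_quality": 9.0, "dose": 6.0}
print(overall_score(candidate, weights))  # (14 + 27 + 12) / 7 ≈ 7.57
```

Ranking thirty candidates then reduces to sorting them by this scalar, which is what makes the reported "nearly twofold" spread between best and worst directly comparable.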
Subject(s)
Cardiac Imaging Techniques/methods; Tomography, X-Ray Computed/methods; Cardiovascular Diseases/diagnostic imaging; Humans
ABSTRACT
BACKGROUND: Photon-counting CT (PCCT) systems acquire multiple spectral measurements at high spatial resolution, providing numerous image quality benefits while also increasing the amount of data that must be transferred through the gantry slip ring. PURPOSE: This study proposes a lossy method to compress photon-counting CT data using eigenvector analysis, with the goal of providing image quality sufficient for applications that require a rapid initial reconstruction, such as to confirm anatomical coverage, scan quality, and to support automated advanced applications. The eigenbin compression method was experimentally evaluated on a clinical silicon PCCT prototype system. METHODS: The proposed eigenbin method performs principal component analysis (PCA) on a set of PCCT calibration measurements. PCA finds the orthogonal axes or eigenvectors, which capture the maximum variance in the N dimensional photon-count data space, where N is the number of acquired energy bins. To reduce the dimensionality of the PCCT data, the data are linearly transformed into a lower dimensional space spanned by the M < N eigenvectors with highest eigenvalues (i.e., the vectors that account for most of the information in the data). Only M coefficients are then transferred per measurement, which we term eigenbin values. After transmission, the original N energy-bin measurements are estimated as a linear combination of the M eigenvectors. Two versions of the eigenbin method were investigated: pixel-specific and pixel-general. The pixel-specific eigenbin method determines eigenvectors for each individual detector pixel, while the more practically realizable pixel-general eigenbin method finds one set of eigenvectors for the entire detector array. The eigenbin method was experimentally evaluated by scanning a 20 cm diameter Gammex Multienergy phantom with different material inserts on a clinical silicon-based PCCT prototype. 
The method was evaluated with the number of eigenbins varied between two and four. In each case, the eigenbins were used to estimate the original 8-bin data, after which material decomposition was performed. The mean, standard deviation, and contrast-to-noise ratio (CNR) of values in the reconstructed basis and virtual monoenergetic images (VMI) were compared for the original 8-bin data and for the eigenbin data. RESULTS: The pixel-specific eigenbin method reduced photon-counting CT data size by a factor of four with <5% change in mean values and a small noise penalty (mean change in noise of <12%, maximum change in noise of 20% for basis images). The pixel-general eigenbin compression method reduced data size by a factor of 2.67 with <5% change in mean values and a less than 10% noise penalty in the basis images (average noise penalty ≤5%). The noise penalty and errors were less for the VMIs than for the basis images, resulting in <5% change in CNR in the VMIs. CONCLUSION: The eigenbin compression method reduced photon-counting CT data size by a factor of two to four with less than 5% change in mean values, noise penalty of less than 10%-20%, and change in CNR ranging from 15% decrease to 24% increase. Eigenbin compression reduces the data transfer time and storage space of photon-counting CT data for applications that require rapid initial reconstructions.
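The pixel-general variant of the eigenbin method can be sketched as PCA over pooled calibration counts followed by projection onto the leading eigenvectors. The simulated flux model below is a stand-in for real calibration measurements, and the bin fractions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: 10000 measurements x 8 energy bins whose
# counts are strongly correlated across bins (as in real PCCT data).
flux = rng.uniform(50, 500, size=(10000, 1))
bin_fractions = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05])
counts = rng.poisson(flux * bin_fractions).astype(float)

# "Pixel-general" eigenbins: one PCA basis for the whole calibration set.
mean = counts.mean(axis=0)
_, _, Vt = np.linalg.svd(counts - mean, full_matrices=False)

M = 3                                   # transfer 3 eigenbins instead of 8 bins
eigvecs = Vt[:M]                        # (M, 8) leading eigenvectors

def compress(x):
    return (x - mean) @ eigvecs.T       # M eigenbin values per measurement

def decompress(z):
    return z @ eigvecs + mean           # estimate of the original 8 bins

z = compress(counts)
restored = decompress(z)
rel_err = np.abs(restored - counts).mean() / counts.mean()
print(rel_err < 0.25)                   # small mean reconstruction error
```

The pixel-specific variant repeats the same procedure per detector pixel; it compresses harder at the cost of storing one eigenvector set per pixel.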
ABSTRACT
Background: Four-dimensional (4D) wide coverage computed tomography (CT) is an effective imaging modality for measuring the mechanical function of the myocardium. However, repeated CT measurement across a number of heartbeats is still a concern. Purpose: A projection-domain noise emulation method is presented to generate accurate low-dose (mA modulated) 4D cardiac CT scans from high-dose scans, enabling protocol optimization to deliver sufficient image quality for functional cardiac analysis while using a dose level that is as low as reasonably achievable (ALARA). Methods: Given a targeted low-dose mA modulation curve, the proposed noise emulation method injects both quantum and electronic noise of proper magnitude and correlation to the high-dose data in projection domain. A spatially varying (i.e., channel-dependent) detector gain term as well as its calibration method were proposed to further improve the noise emulation accuracy. To determine the ALARA dose threshold, a straightforward projection domain image quality (IQ) metric was proposed that is based on the number of projection rays that do not fall under the non-linear region of the detector response. Experiments were performed to validate the noise emulation method with both phantom and clinical data in terms of visual similarity, contrast-to-noise ratio (CNR), and noise-power spectrum (NPS). Results: For both phantom and clinical data, the low-dose emulated images exhibited similar noise magnitude (CNR difference within 2%), artifacts, and texture to that of the real low-dose images. The proposed channel-dependent detector gain term resulted in additional increase in emulation accuracy. Using the proposed IQ metric, recommended kVp and mA settings were calculated for low dose 4D Cardiac CT acquisitions for patients of different sizes. Conclusions: A detailed method to estimate system-dependent parameters for a raw-data based low dose emulation framework was described. 
The method produced realistic noise levels, artifacts, and texture with phantom and clinical studies. The proposed low-dose emulation method can be used to prospectively select patient-specific minimal-dose protocols for functional cardiac CT.
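The core of the emulation, adding just enough quantum noise plus electronic noise so that scaled high-dose counts match low-dose statistics, can be sketched as follows. The uniform gain and the electronic-noise magnitude are assumptions, and the paper's channel-dependent gain calibration is deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def emulate_low_dose(counts_high, dose_ratio, sigma_e=3.0):
    """Inject quantum and electronic noise into high-dose count data to
    emulate a scan at dose_ratio (< 1) of the acquired mA.

    counts_high : detected counts at high dose
    dose_ratio  : target mA / acquired mA
    sigma_e     : rms electronic noise, in count units (assumed)
    """
    scaled = counts_high * dose_ratio
    # Extra quantum variance: target Poisson variance at low dose minus the
    # variance already carried over by scaling the high-dose data.
    extra_var = scaled - counts_high * dose_ratio ** 2
    quantum = rng.normal(0.0, np.sqrt(np.maximum(extra_var, 0.0)))
    electronic = rng.normal(0.0, sigma_e, size=counts_high.shape)
    return scaled + quantum + electronic

high = rng.poisson(1e5, size=100000).astype(float)
low = emulate_low_dose(high, dose_ratio=0.25)
# Emulated variance should match the Poisson variance of a true quarter-dose scan:
print(abs(low.var() / (0.25 * 1e5) - 1) < 0.1)  # → True
```

Because the injected noise is generated in the projection domain, it propagates through reconstruction with the correct spatial correlation, which is why the emulated images reproduce low-dose texture and artifacts rather than just matching a global noise level.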
ABSTRACT
BACKGROUND: Edge-on-irradiated silicon detectors are currently being investigated for use in full-body photon-counting computed tomography (CT) applications. The low atomic number of silicon leads to a significant number of incident photons being Compton scattered in the detector, depositing a part of their energy and potentially being counted multiple times. Even though the physics of Compton scatter is well established, the effects of Compton interactions in the detector on image quality for an edge-on-irradiated silicon detector have still not been thoroughly investigated. PURPOSE: To investigate and explain effects of Compton scatter on low-frequency detective quantum efficiency (DQE) for photon-counting CT using edge-on-irradiated silicon detectors. METHODS: We extend an existing Monte Carlo model of an edge-on-irradiated silicon detector with 60 mm active absorption depth, previously used to evaluate spatial-frequency-based performance, to develop projection and image domain performance metrics for pure density and pure spectral imaging tasks with 30 and 40 cm water backgrounds. We show that the lowest energy threshold of the detector can be used as an effective discriminator of primary counts and cross-talk caused by Compton scatter. We study the developed metrics as functions of the lowest threshold energy for root-mean-square electronic noise levels of 0.8, 1.6, and 3.2 keV, where the intermediate level 1.6 keV corresponds to the noise level previously measured on a single sensor element in isolation. We also compare the performance of a modeled detector with 8, 4, and 2 optimized energy bins to a detector with 1-keV-wide bins. RESULTS: In terms of low-frequency DQE for density imaging, there is a tradeoff between using a threshold low enough to capture Compton interactions and avoiding electronic noise counts. 
For a 30 cm water phantom, 4 energy bins, and root-mean-square electronic noise of 0.8, 1.6, and 3.2 keV, it is optimal to put the lowest energy threshold at 3, 6, and 1 keV, which gives optimal projection-domain DQEs of 0.64, 0.59, and 0.52, respectively. Low-frequency DQE for spectral imaging also benefits from measuring Compton interactions, with respective optimal thresholds of 12, 12, and 13 keV. No large dependence on background thickness was observed. For the intermediate noise level (1.6 keV), increasing the lowest threshold from 5 to 35 keV increases the variance in an iodine basis image by 60%-62% (30 cm phantom) and 67%-69% (40 cm phantom), with 8 bins. Both spectral and density DQE are adversely affected by increasing the electronic noise level. Image-domain DQE exhibits qualitatively similar behavior to projection-domain DQE. CONCLUSIONS: Compton interactions contribute significantly to the density imaging performance of edge-on-irradiated silicon detectors. With the studied detector topology, the benefit of counting primary Compton interactions outweighs the penalty of multiple counting at all lowest threshold energies. Compton interactions also contribute significantly to the spectral imaging performance for measured energies above 10 keV.
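The threshold tradeoff described above can be illustrated with a toy deposited-energy model: lowering the lowest threshold recovers partial Compton deposits but admits electronic-noise counts. The spectra here are crude placeholders for the study's Monte Carlo detector model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy deposited-energy samples for an edge-on silicon detector (keV):
photo = rng.uniform(55, 75, 6000)            # full photoelectric deposits
compton = rng.uniform(1, 25, 3000)           # partial Compton deposits
noise = np.abs(rng.normal(0, 1.6, 100000))   # electronic noise excursions

def counts_above(threshold_kev):
    """(true counts, false noise counts) registered above a given threshold."""
    signal = np.sum(photo >= threshold_kev) + np.sum(compton >= threshold_kev)
    false = np.sum(noise >= threshold_kev)
    return int(signal), int(false)

for t in (1, 3, 6):
    s, f = counts_above(t)
    print(f"threshold {t} keV: {s} signal counts, {f} noise counts")
```

A 1 keV threshold keeps nearly every Compton deposit but registers a flood of electronic-noise counts; at 6 keV (several sigma above a 1.6 keV rms noise floor) the false counts nearly vanish while a fraction of the Compton signal is lost, which is exactly the tradeoff the DQE optimum balances.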
Subject(s)
Monte Carlo Method; Photons; Scattering, Radiation; Silicon; Tomography, X-Ray Computed; Silicon/chemistry; Tomography, X-Ray Computed/instrumentation; Phantoms, Imaging
ABSTRACT
Photon counting CT (PCCT) acquires spectral measurements and enables generation of material decomposition (MD) images that provide distinct advantages in various clinical situations. However, noise amplification is observed in MD images, and denoising is typically applied. Clean or high-quality references are rare in clinical scans, often making supervised learning (Noise2Clean) impractical. Noise2Noise is a self-supervised counterpart that uses noisy images and corresponding noisy references with zero-mean, independent noise. PCCT counts transmitted photons separately, and raw measurements are assumed to follow a Poisson distribution in each energy bin, providing the possibility of creating noise-independent pairs. The approach is to use binomial selection to split the counts into two low-dose scans with independent noise. We prove through noise propagation analysis that the reconstructed spectral images inherit the noise independence from the counts domain, and we validate this in numerical simulation and experimental phantom scans. The method offers the flexibility to split measurements into desired dose levels while ensuring the reconstructed images share identical underlying features, thereby strengthening the model's robustness to input dose levels and its capability to preserve fine details. In both numerical simulation and experimental phantom scans, we demonstrated that Noise2Noise with binomial selection outperforms other common self-supervised learning methods based on different presumptive conditions.
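The binomial-selection step is compact: thinning a Poisson count with a binomial draw yields two statistically independent lower-dose realizations that sum exactly to the original measurement. A minimal sketch on simulated counts (not real PCCT data):

```python
import numpy as np

rng = np.random.default_rng(3)

def binomial_split(counts, p=0.5):
    """Split Poisson photon counts into two independent scans.

    Binomially thinning a Poisson(lam) variable yields independent
    Poisson(p*lam) and Poisson((1-p)*lam) halves, which is the basis
    for building Noise2Noise training pairs from a single acquisition.
    """
    a = rng.binomial(counts, p)
    return a, counts - a

full = rng.poisson(200.0, size=(64, 64))     # one energy bin's raw counts
half_a, half_b = binomial_split(full)

print(np.array_equal(half_a + half_b, full))  # → True: splits sum to original
```

Choosing p other than 0.5 produces pairs at different emulated dose levels, which is the flexibility the abstract refers to.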
ABSTRACT
BACKGROUND: Photon counting detectors (PCDs) provide higher spatial resolution, improved contrast-to-noise ratio (CNR), and energy discriminating capabilities. However, the greatly increased amount of projection data in photon counting computed tomography (PCCT) systems becomes challenging to transmit through the slip ring, process, and store. PURPOSE: This study proposes and evaluates an empirical optimization algorithm to obtain optimal energy weights for energy bin data compression. This algorithm is universally applicable to spectral imaging tasks, including 2- and 3-material decomposition (MD) tasks and virtual monoenergetic images (VMIs). The method is simple to implement while preserving spectral information for the full range of object thicknesses and is applicable to different PCDs, for example, silicon detectors and CdTe detectors. METHODS: We used realistic detector energy response models to simulate the spectral response of different PCDs and an empirical calibration method to fit a semi-empirical forward model for each PCD. We numerically optimized the energy weights by minimizing the average relative Cramér-Rao lower bound (CRLB) due to the energy-weighted bin compression, for MD and VMI tasks over a range of material area densities $\rho_{A,m}$ (0-40 g/cm² water, 0-2.16 g/cm² calcium). We used Monte Carlo simulation of a step wedge phantom and an anthropomorphic head phantom to evaluate the performance of this energy bin compression method in the projection domain and image domain, respectively. RESULTS: The results show that for 2 MD, the energy bin compression method can reduce PCCT data size by 75% and 60%, with an average variance penalty of less than 17% and 3% for silicon and CdTe detectors, respectively. For 3 MD tasks with a K-edge material (iodine), this method can reduce the data size by 62.5% and 40% with an average variance penalty of less than 12% and 13% for silicon and CdTe detectors, respectively.
CONCLUSIONS: We proposed an energy bin compression method that is broadly applicable to different PCCT systems and object sizes, with high data compression ratio and little loss of spectral information.
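The variance penalty of any linear bin compression can be checked against the uncompressed Cramér-Rao lower bound. The bin means, sensitivities, and the crude two-bin weight matrix below are illustrative stand-ins, not the optimized weights or detector models from the study:

```python
import numpy as np

# Illustrative mean counts per energy bin and sensitivities of each bin's
# expected counts to two basis thicknesses (water-like and calcium-like);
# none of these numbers come from a real detector model.
lam = np.array([2000.0, 4000, 6000, 7000, 6000, 4000, 2500, 1500])
A = np.stack([-lam * np.array([0.20, 0.18, 0.16, 0.14, 0.12, 0.11, 0.10, 0.09]),
              -lam * np.array([0.90, 0.70, 0.50, 0.35, 0.25, 0.18, 0.14, 0.11])],
             axis=1)

def crlb(W):
    """Covariance lower bound for basis estimates from y = W @ counts,
    treating independent Poisson bins in the Gaussian approximation."""
    S = W @ np.diag(lam) @ W.T
    F = (W @ A).T @ np.linalg.inv(S) @ (W @ A)   # Fisher information
    return np.linalg.inv(F)

full_bound = np.diag(crlb(np.eye(8)))                  # keep all 8 bins
two_bound = np.diag(crlb(np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                                   [0, 0, 0, 0, 1, 1, 1, 1.0]])))
penalty = two_bound / full_bound
print(np.all(penalty >= 1 - 1e-9))  # → True: compression never adds information
```

The study's optimization searches over the weight matrix W to make this penalty as small as possible across the stated range of material area densities.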
Subject(s)
Cadmium Compounds; Quantum Dots; X-Rays; Silicon; Tellurium; Photons; Phantoms, Imaging
ABSTRACT
Purpose: Photon counting CT (PCCT) provides spectral measurements for material decomposition. However, the image noise (at a fixed dose) depends on the source spectrum. Our study investigates the potential benefits of spectral optimization using fast kV switching and filtration to reduce noise in material decomposition. Approach: The effect of the input spectra on noise performance in both two-basis material decomposition and three-basis material decomposition was compared using Cramér-Rao lower bound analysis in the projection domain and in a digital phantom study in the image domain. The fluences of different spectra were normalized using the CT dose index to maintain constant dose levels. Four detector response models based on Si or CdTe were included in the analysis. Results: For single kV scans, kV selection can be optimized based on the imaging task and object size. Furthermore, our results suggest that noise in material decomposition can be substantially reduced with fast kV switching. For two-material decomposition, fast kV switching reduces the standard deviation (SD) by ∼10%. For three-material decomposition, greater noise reduction in material images was found with fast kV switching (26.2% for calcium and 25.8% for iodine, in terms of SD), which suggests that challenging tasks benefit more from the richer spectral information provided by fast kV switching. Conclusions: The performance of PCCT in material decomposition can be improved by optimizing source spectrum settings. Task-specific tube voltages can be selected for single kV scans. Also, our results demonstrate that utilizing fast kV switching can substantially reduce the noise in material decomposition for both two- and three-material decompositions, and a fixed Gd filter can further enhance such improvements for two-material decomposition.
ABSTRACT
PURPOSE: We investigated spatial resolution loss away from isocenter for a prototype deep silicon photon-counting detector (PCD) CT scanner and compared it with a clinical energy-integrating detector (EID) CT scanner. MATERIALS AND METHODS: We performed three scans on a wire phantom at four positions (isocenter and 6.7, 11.8, and 17.1 cm off isocenter). The acquisition modes were 120 kV EID CT, 120 kV high-definition (HD) EID CT, and 120 kV PCD CT. HD mode used double the projection view angles per rotation of the "regular" EID scan mode. The diameter of the wire was calculated by taking the full width at half maximum (FWHM) of a profile drawn over the radial and azimuthal directions of the wire. Change in wire diameter appearance was assessed by calculating the ratio of the radial and azimuthal diameter relative to isocenter. t tests were used to make pairwise comparisons of the wire diameter ratio for each acquisition and of the mean ratios' difference from unity. RESULTS: Deep silicon PCD CT had statistically smaller (P < 0.05) changes in diameter ratio in both the radial and azimuthal directions compared with both regular and HD EID modes and was not statistically different from unity (P > 0.05). Maximum increases in FWHM relative to isocenter were 36%, 12%, and 1% for regular EID, HD EID, and deep silicon PCD, respectively. CONCLUSION: Deep silicon PCD CT exhibits less change in spatial resolution in both the radial and azimuthal directions compared with EID CT.
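The FWHM measurement used above reduces to locating the half-maximum crossings of the wire profile. The Gaussian profile here is synthetic, standing in for a measured radial or azimuthal profile:

```python
import numpy as np

def fwhm(profile, spacing_mm):
    """Full width at half maximum of a 1D wire profile, with linear
    interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, float) - np.min(profile)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate each edge between the samples straddling half-maximum:
    left = i - (p[i] - half) / (p[i] - p[i - 1])
    right = j + (p[j] - half) / (p[j] - p[j + 1])
    return (right - left) * spacing_mm

# Synthetic Gaussian wire profile, sigma = 0.5 mm, sampled every 0.1 mm:
x = np.arange(-5, 5.01, 0.1)
profile = np.exp(-x**2 / (2 * 0.5**2))
print(round(fwhm(profile, 0.1), 2))  # ≈ 2.355 * sigma ≈ 1.18 mm
```

A growing FWHM ratio relative to isocenter is exactly the off-center blur the study quantifies; the interpolation step matters because the true half-maximum rarely falls on a sample point.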
Subject(s)
Lung; Phantoms, Imaging; Photons; Silicon; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Humans; Lung/diagnostic imaging
ABSTRACT
BACKGROUND: Recent improvements in CT detector technology have led to smaller detector pixels resolving frequencies beyond 20 lp/cm and have enabled ultra-high-resolution (UHR) CT. Silicon-based photon-counting detector (PCD) CT is one such technology that promises improved spatial and spectral resolution. However, when detector pixel sizes are reduced, the impact of cardiac motion on CT images becomes more pronounced. Here, we investigated the effects of cardiac motion on the image quality of a clinical prototype Si-PCD scanner using a dynamic heart phantom. METHODS: A series of 3D-printed vessels was created to simulate coronary arteries with diameters in the 1-3.5 mm range. Four coronary stents were set inside the d = 3.5 mm vessels; all vessels were filled with contrast agent and placed inside a dynamic cardiac phantom. The phantom was scanned in motion (60 bpm) and at rest on a prototype clinical Si-PCD CT scanner in 8-bin spectral UHR mode. Virtual monoenergetic images (VMIs) were generated at 70 keV, and the CT number accuracy and effective spatial resolution (blooming) of rest and motion VMIs were compared. RESULTS: Linear regression analysis of CT numbers showed excellent agreement (r > 0.99) between rest and motion. We did not observe a significant difference (p > 0.48) in estimates of free lumen diameters. Differences in in-stent lumen diameter and stent strut thickness were nonsignificant, with a maximum mean difference of approximately 70 µm. CONCLUSION: We found no significant degradation in CT number accuracy or spatial resolution due to cardiac motion. The results demonstrate the potential of spectral UHR coronary CT angiography enabled by Si-PCD.
Subject(s)
Computed Tomography Angiography; Silicon; Humans; Computed Tomography Angiography/methods; Predictive Value of Tests; Tomography, X-Ray Computed/methods; Coronary Angiography/methods; Phantoms, Imaging
ABSTRACT
PURPOSE: In this report, the authors introduce the general concept of the completeness map as a means to evaluate the completeness of data acquired by a given CT system design (architecture and scan mode). They illustrate the utility of the completeness map by applying the concept to a number of candidate CT system designs, as part of a study to advance the state of the art in cardiac CT. METHODS: In order to optimally reconstruct a point within a volume of interest (VOI), the Radon transform on all possible planes through that point should be measured. The authors quantified the extent to which this ideal condition is satisfied for the entire image volume. They first determined a Radon completeness number for each point in the VOI, as the percentage of possible planes that is actually measured. A completeness map is then defined as a 3D matrix of the completeness numbers for the entire VOI. The authors proposed algorithms to analyze the projection datasets in Radon space and compute the completeness number for a fixed point, and they apply these algorithms to the various architectures and scan modes under evaluation. In this report, the authors consider four selected candidate architectures, operating with different scan modes, for a total of five system design alternatives. Each of these alternatives is evaluated using the completeness map. RESULTS: If the detector size and cone angle are large enough to cover the entire cardiac VOI, a single-source circular scan can have ≥99% completeness over the entire VOI. However, only the central z-slice can be exactly reconstructed, which corresponds to 100% completeness. For a typical single-source architecture, if the detector is limited to an axial dimension of 40 mm, a helical scan needs about five rotations to form an exact reconstruction region covering the cardiac VOI, while a triple-source helical scan requires only two rotations, leading to a 2.5x improvement in temporal resolution.
If the source and detector of an inverse-geometry (IGCT) system have the same axial extent, and the spacing of source points in the axial and transaxial directions is sufficiently small, the IGCT can also form an exact reconstruction region for the cardiac VOI. If the VOI can be covered by the x-ray beam in any view, a composite-circling scan can generate an exact reconstruction region covering the VOI. CONCLUSIONS: The completeness map evaluation provides useful information for selecting the next-generation cardiac CT system design. The proposed completeness map method provides a practical tool for analyzing complex scanning trajectories, where the theoretical image quality for some complex system designs is impossible to predict, without yet-undeveloped reconstruction algorithms.
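The completeness number for a single point can be approximated by Monte Carlo sampling of plane orientations and testing whether each plane through the point intersects the source trajectory. The sketch below handles only a single circular scan, and the geometry values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def completeness(point, radius, z_src, n_planes=200000):
    """Monte Carlo completeness number: fraction of planes through `point`
    that intersect a circular source trajectory of the given radius at
    height z_src. Planes are sampled via uniform random unit normals."""
    n = rng.normal(size=(n_planes, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    d = n @ point                                  # plane offset: n . x = d
    reach = radius * np.hypot(n[:, 0], n[:, 1])    # range of n . (source point)
    hit = np.abs(d - n[:, 2] * z_src) <= reach
    return hit.mean()

# A point in the source plane is 100% complete for a circular scan:
print(completeness(np.array([0.0, 0.0, 0.0]), radius=50.0, z_src=0.0))  # → 1.0
# A point off the source plane misses some planes (cone-beam incompleteness):
print(completeness(np.array([0.0, 0.0, 4.0]), radius=50.0, z_src=0.0) < 1.0)  # → True
```

Repeating this test on a grid of points yields the 3D completeness map; for multi-source or helical trajectories the hit test is replaced by intersection with the full set of source positions.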
Subject(s)
Heart/diagnostic imaging; Tomography, X-Ray Computed/methods; Equipment Design; Tomography, X-Ray Computed/instrumentation
ABSTRACT
BACKGROUND: All photon counting detectors have a characteristic count rate over which their performance degrades. Degradation in the clinical setting takes the form of increased noise, reduced material quantification accuracy, and image artifacts. Count rate is a function of patient attenuation, beam filtration, scanner geometry, and X-ray technique. PURPOSE: To guide protocol and technology development in the photon counting space, knowledge of clinical count rates spanning the complete range of clinical indications and patient sizes is needed. In this paper, we use clinical data to characterize the range of computed tomography (CT) count rates. METHODS: We retrospectively gathered 1980 patient exams spanning the entire body (head/neck/chest/abdomen/extremity) and sampled 36 951 axial image slices. We assigned the tissue labels air/lung/fat/soft tissue/bone to each voxel for each slice using CT number thresholds. We then modeled four different bowtie filters, 70/80/100/120/140 kV spectra, and a range of mA values. We forward-projected each slice to obtain detector-incident count rates, using the geometry of a GE Revolution Apex scanner. Our analysis divided the detector into thirds: the central one-third, one-third of the detector split into two equal regions adjacent to the central third, and the final one-third divided equally between the outer detector edges. We report the 99th percentile of counts to mimic the upper limits of count rates passing through a patient as a function of patient water-equivalent diameter. We also report the percentage of patient scans, by body region, over different count rate thresholds for all combinations of bowtie and beam energy. RESULTS: For routine exam types, we recorded count rates of approximately 3.5 × 10⁸ counts/mm²/s in the torso, extremities, and brain. For neck scans, we observed count rates near 6 × 10⁸ counts/mm²/s.
Our simulations of 1000 mA, appropriately mimicking the mA needs for fast pediatric, fast thoracic, and cardiac scanning, resulted in count rates of over 10 × 108 counts/mm2 /s for the torso, extremities, and brain. At 1000 mA, for the neck region, we observed count rates close to 2 × 109 counts/mm2 /s. Importantly, we saw only a small change in maximum count rate needs over patient size, which we attribute to patient mis-positioning with respect to the bowtie filters. As expected, combinations of kV and bowtie filter with higher beam energies and wider/less attenuating bowtie fluence profiles lead to higher count rates relative to lower energies. The 99th-50th percentile count rate changed the most for the torso region, with a maximum variation of 3.9 × 108 to 1.2 × 107 counts/mm2 /s. The head/neck/extremity regions had less than a 50% change in count rate from the 99th to 50th percentiles. CONCLUSIONS: Our results are the first to use a large patient cohort spanning all body regions to characterize count rates in CT. Our results should be useful in helping researchers understand count rates as a function of body region and mA for various combinations of bowtie filter designs and beam energies. Our results indicate clinical rates >1 × 109 counts/mm2 /s, but they do not predict the image quality impact of using a detector with lower characteristic count rates.
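The voxel-labeling and attenuation steps described in METHODS can be sketched in a few lines. Everything here is an illustrative simplification: the HU cut points, the parallel-beam single-view forward projection, and the function names are assumptions, not the paper's actual thresholds or the Revolution Apex fan-beam and bowtie/spectrum models.

```python
import numpy as np

# Assumed HU cut points for the air/lung/fat/soft-tissue/bone labels
# (the paper's exact thresholds are not given in the abstract).
HU_BINS = [(-1100, -950), (-950, -200), (-200, -50), (-50, 150), (150, 3100)]

def label_voxels(hu_slice):
    """Assign a tissue label index (0=air ... 4=bone) by CT-number thresholding."""
    labels = np.zeros(hu_slice.shape, dtype=int)
    for i, (lo, hi) in enumerate(HU_BINS):
        labels[(hu_slice >= lo) & (hu_slice < hi)] = i
    return labels

def detector_counts(mu_slice, incident_counts, pixel_mm):
    """Beer-Lambert attenuation along parallel rays (columns of the slice);
    mu is in 1/cm, pixel size in mm."""
    path = mu_slice.sum(axis=0) * (pixel_mm / 10.0)  # path integral in cm
    return incident_counts * np.exp(-path)

def percentile_count_rate(count_maps, q=99):
    """Upper-limit count-rate statistic over all detector cells,
    mimicking the paper's 99th-percentile summary."""
    return np.percentile(np.concatenate([c.ravel() for c in count_maps]), q)
```

In the real pipeline each label would be mapped to an energy-dependent attenuation coefficient and the projection repeated per view, bowtie, and spectrum; the structure, however, is the same: label, attenuate, summarize by percentile.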
Subject(s)
Head , Tomography, X-Ray Computed , Humans , Child , Retrospective Studies , Tomography, X-Ray Computed/methods , Brain , Radionuclide Imaging , Phantoms, Imaging

ABSTRACT
PURPOSE: Standard four-dimensional computed tomography (4DCT) cardiac reconstructions typically include spiraling artifacts that depend not only on the motion of the heart but also on the gantry angle range over which the data were acquired. We seek to reduce these motion artifacts and thereby improve the accuracy of left ventricular wall positions in 4DCT image series. METHODS: We use a motion artifact reduction approach (ResyncCT) that is based largely on conjugate pairs of partial angle reconstruction (PAR) images. After identifying the key locations where motion artifacts exist in the uncorrected images, paired subvolumes within the PAR images are analyzed with a modified cross-correlation function to estimate 3D velocity and acceleration vectors at these locations. A subsequent motion compensation process (also based on PAR images) includes the creation of a dense motion field, followed by a backproject-and-warp style compensation. The algorithm was tested on a 3D-printed phantom representing the left ventricle (LV) and on challenging clinical cases corrupted by severe artifacts. RESULTS: The results from our preliminary phantom test, as well as from clinical cardiac scans, show crisp endocardial edges and resolved double-wall artifacts. When viewed as a temporal series, the corrected images exhibit much smoother motion of the LV endocardial boundary compared with the uncorrected images. In addition, quantitative results from our phantom studies show that ResyncCT processing reduces endocardial surface distance errors from 0.9 ± 0.8 mm to 0.2 ± 0.1 mm. CONCLUSIONS: The ResyncCT algorithm was shown to be effective in reducing motion artifacts and restoring accurate wall positions. Some perspectives on the use of conjugate-PAR images, and on techniques for CT motion artifact reduction more generally, are also given.
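The velocity-estimation step can be illustrated with a plain FFT cross-correlation between paired subvolumes. The actual ResyncCT metric is a *modified* cross-correlation whose details are not given in the abstract, so the functions below are hypothetical stand-ins for the idea: find the shift that best aligns two conjugate partial-angle reconstructions, then divide by their temporal separation to get a velocity.

```python
import numpy as np

def estimate_shift(vol_a, vol_b):
    """Integer 3D shift of vol_a relative to vol_b via circular FFT
    cross-correlation (a stand-in for the paper's modified metric)."""
    xc = np.fft.ifftn(np.fft.fftn(vol_a) * np.conj(np.fft.fftn(vol_b))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map peak indices into the signed range [-n/2, n/2).
    return tuple(int(p) - n if p > n // 2 else int(p)
                 for p, n in zip(peak, xc.shape))

def velocity_from_conjugate_pars(shift_vox, voxel_mm, dt_s):
    """Shift between conjugate PAR images, separated in time by dt_s,
    converted to a velocity vector in mm/s."""
    return tuple(s * voxel_mm / dt_s for s in shift_vox)
```

A dense motion field would then be interpolated from many such local estimates before the backproject-and-warp compensation.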
Subject(s)
Artifacts , Four-Dimensional Computed Tomography , Algorithms , Four-Dimensional Computed Tomography/methods , Heart Ventricles/diagnostic imaging , Motion , Phantoms, Imaging

ABSTRACT
High-energy x-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching that achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, however, generate a sinogram that is mathematically transformed into pixel-based images, and the dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection can fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm is presented for reconstructing the boundaries of an object with uniform material composition and uniform density. The proposed approach has three major benefits. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; an iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by boundary parameters than by image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and processing time can be improved.
Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from those modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. The feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental data from an industrial CT system.
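As a toy instance of the idea, a uniform disk can be "reconstructed" directly as boundary parameters (center, radius) by fitting its analytic projection to the measured sinogram, with no pixel grid or edge detection involved. The grid search below is only a sketch under assumed names; the paper's iterative algorithm handles general parametric boundaries.

```python
import numpy as np

def circle_sinogram(center, radius, offsets):
    """Parallel-beam path lengths through a uniform disk for one view
    (by symmetry, every view of a disk on the rotation axis is identical)."""
    d = offsets - center
    inside = np.abs(d) < radius
    chords = np.zeros_like(offsets)
    chords[inside] = 2.0 * np.sqrt(radius**2 - d[inside]**2)
    return chords

def fit_disk(measured, offsets, radii, centers):
    """Parametric boundary reconstruction for a uniform disk: search the
    (center, radius) grid for the best least-squares fit to the measured
    sinogram, instead of reconstructing image pixels."""
    best, best_err = None, np.inf
    for r in radii:
        for c in centers:
            err = np.sum((circle_sinogram(c, r, offsets) - measured) ** 2)
            if err < best_err:
                best, best_err = (c, r), err
    return best
```

The fitted parameters are directly comparable to CMM output, which is exactly the compatibility the third benefit above refers to.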
Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Computer Simulation , Least-Squares Analysis , Linear Models , Models, Theoretical , X-Rays

ABSTRACT
PURPOSE: Our goal is to develop a model-based approach for CT dose estimation. We previously presented a CT dose estimation method that offered good accuracy in soft tissue regions but lower accuracy in bone regions. In this work, we propose an improved physics-based approach to achieve high accuracy for all materials and realistic clinical anatomies. METHODS: Like Monte Carlo techniques, we start from a model or image of the patient and model all relevant x-ray interaction processes. Unlike Monte Carlo techniques, we do not track each individual photon; instead, we compute the average behavior of the x-ray interactions, combining pencil-beam calculations for the first-order interactions and kernels for the higher-order interactions. The new algorithm more accurately models the variation of materials in the human body, especially higher-attenuation materials such as bone, as well as the various x-ray attenuation processes. We performed validation experiments with analytic phantoms and a polychromatic x-ray spectrum, comparing against Monte Carlo simulation (GEANT4) as the ground truth. RESULTS: The results show that the proposed method has improved accuracy in both soft tissue and bone regions: less than 6% voxel-wise error and less than 3.2% ROI-based error in an anthropomorphic phantom. The computational cost is on the order of a low-resolution filtered backprojection reconstruction. CONCLUSIONS: We introduced improved physics-based models in a fast CT dose reconstruction approach. The improved approach demonstrated quantitatively good correspondence to a Monte Carlo gold standard in both soft tissue and bone regions in a chest phantom with a realistic polychromatic spectrum and could potentially be used for real-time applications such as patient- and organ-specific scan planning and organ dose reporting.
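A one-dimensional caricature of the pencil-beam-plus-kernel idea: compute first-order energy deposition along a ray via Beer-Lambert attenuation, then approximate higher-order interactions by convolving the primary deposition with a normalized scatter kernel. The kernel shape and function names are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

def primary_dose_1d(mu, fluence, dx_cm):
    """First-order (pencil-beam) deposition: energy removed from the beam
    in each voxel along a single ray, with mu in 1/cm."""
    trans = np.exp(-np.cumsum(mu) * dx_cm)          # transmission after each voxel
    entering = np.concatenate(([1.0], trans[:-1]))  # fraction entering each voxel
    return fluence * entering * (1.0 - np.exp(-mu * dx_cm))

def add_scatter_kernel(primary, kernel):
    """Higher-order interactions modeled as a convolution of the primary
    deposition with a normalized kernel (the paper's kernels are material-
    and spectrum-dependent; a short symmetric kernel stands in here)."""
    k = np.asarray(kernel, dtype=float)
    return np.convolve(primary, k / k.sum(), mode="same")
```

Because no photons are tracked individually, the cost scales like a projection/backprojection pass rather than like a Monte Carlo history count.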
Subject(s)
Algorithms , Phantoms, Imaging , Tomography, X-Ray Computed , Humans , Monte Carlo Method , Photons

ABSTRACT
PURPOSE: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., "Multisource inverse-geometry CT. Part II. X-ray source design and prototype," Med. Phys. 43, 4617-4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT-related publications. METHODS: The authors designed and implemented a gantry-based 32-source IGCT scanner with a 22 cm field of view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, a focal spot size as small as 0.4 × 0.8 mm, and 80-140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. RESULTS: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated at 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels, but minimal artifacts. CONCLUSIONS: IGCT has unique benefits in terms of dose efficiency and cone-beam artifacts but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors' knowledge, their prototype is the first gantry-based IGCT scanner.
The authors summarize the design and implementation of the scanner and present results with phantoms and small animals.
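The quoted 13 lp/cm figure is the spatial frequency at which the measured MTF falls to 5%. Reading such a cutoff off a sampled MTF curve is a small interpolation exercise; the helper below (an assumed name, for a monotonically decreasing, densely sampled MTF) is a sketch of that step, not the authors' measurement procedure.

```python
import numpy as np

def cutoff_frequency(freqs, mtf, level=0.05):
    """Frequency (e.g., in lp/cm) at which a monotonically decreasing MTF
    first drops to `level`, found by linear interpolation between the
    bracketing samples."""
    idx = np.argmax(mtf < level)          # first sample below the level
    f0, f1 = freqs[idx - 1], freqs[idx]
    m0, m1 = mtf[idx - 1], mtf[idx]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
```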
Subject(s)
Tomography, X-Ray Computed/instrumentation , Tomography, X-Ray Computed/methods , Algorithms , Animals , Artifacts , Calibration , Equipment Design , Phantoms, Imaging , Polymethyl Methacrylate , Rabbits , Radiation Dosage , Rats , Scattering, Radiation , Tomography Scanners, X-Ray Computed , Water , X-Rays

ABSTRACT
Parsing volumetric computed tomography (CT) into 10 or more salient organs simultaneously is a challenging task with many applications, such as personalized scan planning and dose reporting. In the clinic, pre-scan data can come in the form of very low dose volumes acquired just prior to the primary scan or from an existing primary scan. To localize organs in such diverse data, we propose a new learning-based framework that we call hierarchical pictorial structures (HPS), which builds multiple levels of models in a tree-like hierarchy that mirrors the natural decomposition of human anatomy from gross structures to finer structures. Each node of our hierarchical model learns (1) the local appearance and shape of structures and (2) a generative global model of probabilistic structural arrangement. Our main contribution is twofold. First, we embed the pictorial structures approach in a hierarchical framework, which reduces test-time image interpretation and allows for the incorporation of additional geometric constraints that robustly guide model fitting in the presence of noise. Second, we guide our HPS framework with probabilistic cost maps extracted using random decision forests on volumetric 3D HOG features, which makes our model fast to train, fast to apply to novel test data, and highly invariant to shape distortion and imaging artifacts. All steps require approximately 3 minutes to compute, and all organs are located with suitably high accuracy for our clinical applications, such as personalized scan planning for radiation dose reduction. We assess our method using a database of volumetric CT scans from 81 subjects with widely varying age and pathology, and with simulated ultra-low-dose cadaver pre-scan data.
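The hierarchical constraint — localize a gross structure first, then search each child organ only within a window predicted by its parent — can be sketched as follows. The cost maps stand in for the random-forest/3D-HOG outputs, and the function names and two-level tree are illustrative assumptions, not the HPS implementation.

```python
import numpy as np

def localize(cost_map, region=None):
    """Pick the voxel with the maximum probabilistic cost, optionally
    restricted to a (lo, hi) bounding box per axis."""
    if region is None:
        masked = cost_map
    else:
        masked = np.full(cost_map.shape, -np.inf)
        sl = tuple(slice(lo, hi) for lo, hi in region)
        masked[sl] = cost_map[sl]
    return np.unravel_index(np.argmax(masked), cost_map.shape)

def localize_hierarchy(parent_map, child_map, offset, window):
    """Parent first; the child search is constrained to a window around
    parent position + learned mean offset (the geometric prior an HPS
    tree edge encodes)."""
    p = localize(parent_map)
    region = [(max(0, p[a] + offset[a] - window),
               min(child_map.shape[a], p[a] + offset[a] + window + 1))
              for a in range(child_map.ndim)]
    return p, localize(child_map, region)
```

Restricting the child search this way is what makes the fitting both fast and robust: a spuriously high cost outside the anatomically plausible window is simply never considered.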