ABSTRACT
PURPOSE: To investigate the utility and generalizability of deep learning subtraction angiography (DLSA) for generating synthetic digital subtraction angiography (DSA) images without misalignment artifacts. MATERIALS AND METHODS: DSA images and native digital angiograms of the cerebral, hepatic, and splenic vasculature, both with and without motion artifacts, were retrospectively collected. Images were divided into a motion-free training set (n = 66 patients, 9,161 images) and a motion artifact-containing test set (n = 22 patients, 3,322 images). Using the motion-free set, the deep neural network pix2pix was trained to produce synthetic DSA images without misalignment artifacts directly from native digital angiograms. After training, the algorithm was tested on digital angiograms of hepatic and splenic vasculature with substantial motion. Four board-certified radiologists evaluated performance via visual assessment using a 5-grade Likert scale. Subgroup analyses were performed to analyze the impact of transfer learning and generalizability to novel vasculature. RESULTS: Compared with the traditional DSA method, the proposed approach was found to generate synthetic DSA images with significantly fewer background artifacts (a mean rating of 1.9 [95% CI, 1.1-2.6] vs 3.5 [3.5-4.4]; P = .01) without a significant difference in foreground vascular detail (mean rating of 3.1 [2.6-3.5] vs 3.3 [2.8-3.8], P = .19) in both the hepatic and splenic vasculature. Transfer learning significantly improved the quality of generated images (P < .001). CONCLUSIONS: DLSA successfully generates synthetic angiograms without misalignment artifacts, is improved through transfer learning, and generalizes reliably to novel vasculature that was not included in the training data.
Subjects
Deep Learning, Humans, Retrospective Studies, Digital Subtraction Angiography/methods, Liver, Artifacts
ABSTRACT
Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diameter 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM), both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with ~2 mm pixels provided higher detection performance than those with ~4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than those commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
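The LROC figure of merit used above can be estimated nonparametrically from observer ratings and localization correctness. The sketch below is a generic illustration of such an estimator, not code from the study; the function name and the simple handling of rating ties are assumptions.

```python
import numpy as np

def lroc_area(t_absent, t_present, localized_correctly):
    # Nonparametric area under the LROC curve: the probability that a
    # lesion-present image outscores a lesion-absent image AND the lesion
    # is correctly localized.  Rating ties are counted as 1/2.
    t0 = np.asarray(t_absent, dtype=float)[:, None]        # shape (n_absent, 1)
    t1 = np.asarray(t_present, dtype=float)[None, :]       # shape (1, n_present)
    ok = np.asarray(localized_correctly, dtype=float)[None, :]
    wins = (t1 > t0).astype(float) + 0.5 * (t1 == t0)
    return float((wins * ok).mean())
```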
ABSTRACT
Task-based assessments of image quality constitute a rigorous, principled approach to the evaluation of imaging system performance. To conduct such assessments, it has been recognized that mathematical model observers are very useful, particularly for purposes of imaging system development and optimization. One type of model observer that has been widely applied in the medical imaging community is the channelized Hotelling observer (CHO). Since estimates of CHO performance typically include statistical variability, it is important to control and limit this variability to maximize the statistical power of image-quality studies. In a previous paper, we demonstrated that by including prior knowledge of the image class means, a large decrease in the bias and variance of CHO performance estimates can be realized. The purpose of the present work is to present refinements and extensions of the estimation theory given in our previous paper, which was limited to point estimation with equal numbers of images from each class. Specifically, we present and characterize minimum-variance unbiased point estimators for observer signal-to-noise ratio (SNR) that allow for unequal numbers of lesion-absent and lesion-present images. Building on this SNR point estimation theory, we then show that confidence intervals with exactly-known coverage probabilities can be constructed for commonly-used CHO performance measures. Moreover, we propose simple, approximate confidence intervals for CHO performance, and we show that they are well-behaved in most scenarios of interest.
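For concreteness, a minimal plug-in computation of CHO SNR from channel outputs is sketched below, optionally using a known difference of class means as discussed above. This is only the naive sample-based estimate; the minimum-variance unbiased estimators and confidence intervals described in the abstract involve additional finite-sample corrections not shown here, and the function name is an assumption.

```python
import numpy as np

def cho_snr(ch_absent, ch_present, known_delta_mean=None):
    # Plug-in channelized Hotelling observer SNR from channel outputs,
    # arrays of shape (n_images, n_channels).
    dv = (known_delta_mean if known_delta_mean is not None
          else ch_present.mean(axis=0) - ch_absent.mean(axis=0))
    S = 0.5 * (np.cov(ch_absent, rowvar=False) +
               np.cov(ch_present, rowvar=False))   # pooled intra-class covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```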
ABSTRACT
Deep-learning (DL) based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using a first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise. We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results showed that network linearization works well under normal exposure settings. For such applications, linearization can characterize image noise and resolution without running MC simulations. With this work, we provide the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully popularize physics-related image quality measures for DL applications. Our methodology is general; it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images needed for network linearization.
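As a rough illustration of first-order (Taylor) linearization of a reconstruction pipeline, the sketch below probes a trained network numerically around an operating point: a local impulse response for resolution, and diag(J Σ Jᵀ) for noise. It is a finite-difference stand-in for the analytical tools described above, not the paper's implementation; `net`, `fbp`, the probe count, and the step size are placeholders.

```python
import numpy as np

def lin_response(pipeline, y0, direction, eps=1e-3):
    # First-order action of the Jacobian at operating point y0:
    #   J(y0) @ direction ~= [f(y0 + eps*d) - f(y0 - eps*d)] / (2*eps)
    return (pipeline(y0 + eps * direction) - pipeline(y0 - eps * direction)) / (2 * eps)

def local_impulse_response(net, fbp, y0, impulse_sino, eps=1e-3):
    # Resolution probe: linearized response to a small perturbation of the data.
    return lin_response(lambda y: net(fbp(y)), y0, impulse_sino, eps)

def linearized_variance_map(net, fbp, y0, noise_chol, n_probe=100, eps=1e-3, seed=0):
    # Noise probe: diag(J Sigma J^T) estimated by pushing random data-noise
    # realizations through the *linearized* pipeline, since E[(J n)^2] = diag(J Sigma J^T)
    # when n ~ N(0, Sigma) and noise_chol is a Cholesky factor of Sigma.
    rng = np.random.default_rng(seed)
    f = lambda y: net(fbp(y))
    acc = 0.0
    for _ in range(n_probe):
        n = noise_chol @ rng.standard_normal(noise_chol.shape[1])
        acc = acc + lin_response(f, y0, n, eps) ** 2
    return acc / n_probe
```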
Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Radiation Dosage, Computer Simulation, Computer-Assisted Image Processing/methods, Signal-to-Noise Ratio
ABSTRACT
PURPOSE: Dedicated breast CT prototypes used in clinical investigations utilize a single circular source trajectory and cone-beam geometry with flat-panel detectors, which do not satisfy data-sufficiency conditions and could lead to cone-beam artifacts. Hence, this work investigated the glandular dose characteristics of a circle-plus-line trajectory that fulfills data-sufficiency conditions for image reconstruction in dedicated breast CT. METHODS: Monte Carlo-based computer simulations were performed using the GEANT4 toolkit and were validated against previously reported normalized glandular dose coefficients for one prototype breast CT system. Upon validation, Monte Carlo simulations were performed to determine the normalized glandular dose coefficients as a function of x-ray source position along the line scan. The source-to-axis-of-rotation distance and the source-to-detector distance were maintained constant at 65 and 100 cm, respectively, in all simulations. The ratio of the normalized glandular dose coefficient at each source position along the line scan to that for the circular scan, defined as the relative normalized glandular dose coefficient (RD(g)N), was studied by varying the diameter of the breast at the chest wall, the chest-wall to nipple distance, the skin thickness, the x-ray beam energy, and the glandular fraction of the breast. RESULTS: The RD(g)N metric, when stated as a function of source position along the line scan relative to the maximum line-scan length needed for data sufficiency, was found to be minimally dependent on breast diameter, chest-wall to nipple distance, skin thickness, glandular fraction, and x-ray photon energy. This observation facilitates easy estimation of the average glandular dose of the line scan. Polynomial fit equations for computing the RD(g)N, and hence the average glandular dose, are provided. CONCLUSIONS: For a breast CT system that acquires 300-500 projections over 2π for the circular scan, the addition of a line trajectory with equal source spacing and with constant x-ray beam quality (kVp and HVL) and mAs matched to the circular scan will result in less than a 0.18% increase in average glandular dose to the breast per projection along the line scan.
Subjects
Mammography/methods, Radiation Dosage, Monte Carlo Method, Reproducibility of Results
ABSTRACT
This paper is motivated by the problem of image-quality assessment using model observers for the purpose of development and optimization of medical imaging systems. Specifically, we present a study regarding the estimation of the receiver operating characteristic (ROC) curve for the observer and associated summary measures. This study evaluates the statistical advantage that may be gained in ROC estimates of observer performance by assuming that the difference of the class means for the observer ratings is known. Such knowledge is frequently available in image-quality studies employing known-location lesion detection tasks together with linear model observers. The study is carried out by introducing parametric point and confidence interval estimators that incorporate a known difference of class means. An evaluation of the new estimators for the area under the ROC curve establishes that a large reduction in statistical variability can be achieved through incorporation of knowledge of the difference of class means. Namely, the mean 95% AUC confidence interval length can be as much as seven times smaller in some cases. We also examine how knowledge of the difference of class means can be advantageously used to compare the areas under two correlated ROC curves, and observe similar gains.
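As a small illustration of how a known difference of class means enters an AUC estimate for a linear observer under the binormal model, consider the plug-in form below. The paper's point and interval estimators are more refined; this sketch, including its function name, is only an assumed, simplified version.

```python
import numpy as np
from scipy.stats import norm

def auc_known_delta(t_absent, t_present, delta_mean):
    # Binormal-model AUC for observer ratings when the difference of the
    # class means (delta_mean) is known a priori; only the two within-class
    # variances are estimated from the samples.
    s0 = np.var(t_absent, ddof=1)
    s1 = np.var(t_present, ddof=1)
    return float(norm.cdf(delta_mean / np.sqrt(s0 + s1)))
```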
ABSTRACT
The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, Algorithms, Artificial Intelligence, Brain, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods
ABSTRACT
We are interested in learning the hyperparameters of a convex objective function in a supervised setting. The complex relationship between the input data to the convex problem and the desirable hyperparameters can be modeled by a neural network; the hyperparameters and the data then drive the convex minimization problem, whose solution is then compared to training labels. In our previous work (Xu and Noo 2021 Phys. Med. Biol. 66 19NT01), we evaluated a prototype of this learning strategy in an optimization-based sinogram smoothing plus FBP reconstruction framework. A question arising in this setting is how to efficiently compute (backpropagate) the gradient from the solution of the optimization problem to the hyperparameters, to enable end-to-end training. In this work, we first develop general formulas for gradient backpropagation for a subset of convex problems, namely the proximal mapping. To illustrate the value of the general formulas and to demonstrate how to use them, we consider the specific instance of 1D quadratic smoothing (denoising), whose solution admits a dynamic programming (DP) algorithm. The general formulas lead to another DP algorithm for exact computation of the gradient with respect to the hyperparameters. Our numerical studies demonstrate a 55%-65% computation time savings when providing a custom gradient instead of relying on automatic differentiation in deep learning libraries. While our discussion focuses on 1D quadratic smoothing, our initial results (not presented) support the statement that the general formulas and the computational strategy apply equally well to TV or Huber smoothing problems on simple graphs whose solutions can be computed exactly via DP.
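To make the 1D quadratic smoothing example concrete, the sketch below solves x*(λ) = argmin ½‖x−y‖² + ½λ‖Dx‖² with a banded (tridiagonal) solve, which plays a role analogous to the DP recursion, and backpropagates a gradient to λ by implicit differentiation of (I + λDᵀD)x* = y. This is a generic illustration under those assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import solveh_banded

def _banded_system(n, lam):
    # Upper banded storage of the symmetric tridiagonal matrix  I + lam * D^T D
    # for 1-D first differences D.
    ab = np.zeros((2, n))
    ab[1, :] = 1.0 + 2.0 * lam
    ab[1, 0] = ab[1, -1] = 1.0 + lam     # boundary rows see only one difference
    ab[0, 1:] = -lam                     # superdiagonal
    return ab

def quad_smooth(y, lam):
    # x*(lam) = (I + lam * D^T D)^{-1} y  via a tridiagonal solve.
    return solveh_banded(_banded_system(len(y), lam), y)

def grad_wrt_lam(y, lam, grad_x):
    # Backpropagation to the hyperparameter: implicit differentiation of
    # (I + lam D^T D) x* = y  gives  dx*/dlam = -(I + lam D^T D)^{-1} D^T D x*,
    # so  dL/dlam = -[(I + lam D^T D)^{-1} grad_x]^T (D^T D x*).
    x = quad_smooth(y, lam)
    DtDx = np.concatenate(([x[0] - x[1]],
                           2 * x[1:-1] - x[:-2] - x[2:],
                           [x[-1] - x[-2]]))
    z = solveh_banded(_banded_system(len(y), lam), grad_x)
    return -float(z @ DtDx)
```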
Subjects
Algorithms, Neural Networks (Computer)
ABSTRACT
BACKGROUND: Various clinical studies show the potential for a wider quantitative role of diagnostic X-ray computed tomography (CT) beyond size measurements. Currently, however, the clinical use of attenuation values is limited by their lack of robustness. This issue can be observed even on the same scanner across patient size and positioning. There are different causes for the lack of robustness in the attenuation values; one possible source of error is beam hardening of the X-ray source spectrum. The conventional and well-established approach to address this issue is a calibration-based single-material beam hardening correction (BHC) using a water cylinder. PURPOSE: We investigate an alternative approach for single-material BHC with the aim of producing more robust attenuation values. The underlying hypothesis of this investigation is that calibration-based BHC automatically corrects for scattered radiation in a manner that becomes suboptimal in terms of bias as soon as the scanned object strongly deviates from the water cylinder used for calibration. METHODS: The proposed approach performs BHC via an analytical energy response model that is embedded into a correction pipeline that efficiently estimates and subtracts scattered radiation in a patient-specific manner prior to BHC. The estimation of scattered radiation is based on minimizing, on average, the squared difference between our corrected data and the vendor-calibrated data. The energy response model accounts for the spectral effects of the detector response and of the prefiltration of the source spectrum, including a beam-shaping bowtie filter. The performance of the correction pipeline is first characterized with computer-simulated data. Afterward, it is tested using real 3-D CT data sets of two different phantoms, acquired with various kV settings and phantom positions, assuming a circular data acquisition. The results are compared in the image domain to those from the scanner. RESULTS: For experiments with a water cylinder, the proposed correction pipeline leads to results similar to the vendor's. For reconstructions of a QRM liver phantom with extension ring, the proposed correction pipeline achieved a more uniform and stable outcome in the attenuation values of homogeneous materials within the phantom. For example, the root mean squared deviation between centered and off-centered phantom positioning was reduced from 6.6 to 1.8 HU in one profile. CONCLUSIONS: We have introduced a patient-specific approach for single-material BHC in diagnostic CT via the use of an analytical energy response model. This approach shows promising improvements in the robustness of attenuation values for large patient sizes. Our results contribute toward improving CT images so as to make CT attenuation values more reliable for use in clinical practice.
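For orientation, a generic single-material (water) beam hardening correction based on an energy response model is sketched below: build the polyenergetic log-projection as a function of water thickness, then invert it per measurement to an equivalent monoenergetic projection. The paper's pipeline additionally models the bowtie prefiltration, per-channel detector response, and scatter subtraction; the arrays, defaults, and function names here are assumptions.

```python
import numpy as np

def build_bhc_lut(energies_keV, spectrum, mu_water_cm, detector_resp=None,
                  L_grid_cm=np.linspace(0.0, 60.0, 601)):
    # Polyenergetic log-projection p_poly(L) for a water slab of thickness L,
    # given an assumed source spectrum and detector energy weighting.
    w = spectrum * (detector_resp if detector_resp is not None else energies_keV)
    w = w / w.sum()                                   # energy-integrating weighting
    T = np.exp(-np.outer(L_grid_cm, mu_water_cm))     # transmission per (L, E)
    return L_grid_cm, -np.log(T @ w)

def bhc_correct(p_meas, L_grid_cm, p_poly, mu_eff_cm):
    # Map each measured polyenergetic value to an equivalent water thickness
    # (p_poly is monotone in L), then to mu_eff * L at a chosen reference energy.
    L = np.interp(p_meas, p_poly, L_grid_cm)
    return mu_eff_cm * L
```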
Subjects
X-Ray Computed Tomography, Water, Algorithms, Calibration, Humans, Imaging Phantoms, X-Ray Computed Tomography/methods, X-Rays
ABSTRACT
Purpose: For 50 years now, SPIE Medical Imaging (MI) conferences have been the premier forum for disseminating and sharing new ideas, technologies, and concepts on the physics of MI. Approach: Our overarching objective is to demonstrate and highlight the major trajectories of imaging physics and how they have been informed by the community and the science present and presented at SPIE MI conferences from their inception to now. Results: These contributions range from the development of image science, image quality metrology, and image reconstruction to digital x-ray detectors that have revolutionized MI modalities including radiography, mammography, fluoroscopy, tomosynthesis, and computed tomography (CT). Recent advances in detector technology, such as photon-counting detectors, continue to enable new capabilities in MI. Conclusion: As we celebrate the past 50 years, we are also excited about what the next 50 years of SPIE MI will bring to the physics of MI.
ABSTRACT
We propose a hyperparameter learning framework that learns patient-specific hyperparameters for optimization-based image reconstruction problems in x-ray CT applications. The framework consists of two functional modules: (1) a hyperparameter learning module parameterized by a convolutional neural network, and (2) an image reconstruction module that takes as inputs both the noisy sinogram and the hyperparameters from (1) and generates the reconstructed images. As a proof-of-concept study, in this work we focus on a subclass of optimization-based image reconstruction problems with exactly computable solutions so that the whole network can be trained end-to-end in an efficient manner. Unlike existing hyperparameter learning methods, our proposed framework generates patient-specific hyperparameters from the sinogram of the same patient. Numerical studies demonstrate the effectiveness of our proposed approach compared to bi-level optimization.
Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, Humans, Computer-Assisted Image Processing/methods, Neural Networks (Computer), X-Rays
ABSTRACT
We present a theoretically-exact and stable computed tomography (CT) reconstruction algorithm that is capable of handling interrupted illumination and therefore of using all measured data at arbitrary pitch. This algorithm is based on a differentiated backprojection (DBP) on M-lines. First, we discuss the problem of interrupted illumination and how it affects the DBP. Then we show that it is possible to take advantage of some properties of the DBP to compensate for the effects of interrupted illumination in a mathematically exact way. From there, we have developed an efficient algorithm which we have successfully implemented. We show encouraging preliminary results using both computer-simulated data and real data. Our results show that our method is capable of achieving a substantial reduction of image noise when decreasing the helix pitch compared with the maximum pitch case. We conclude that the proposed algorithm defines for the first time a theoretically-exact and stable reconstruction method that is capable of beneficially using all measured data at arbitrary pitch.
ABSTRACT
A direct filtered-backprojection (FBP) reconstruction algorithm is presented for circular cone-beam computed tomography (CB-CT) that allows the filter operation to be applied efficiently with shift-variant band-pass characteristics on the kernel function. Our algorithm is derived from the ramp-filter based FBP method of Feldkamp et al. and obtained by decomposing the ramp filtering into a convolution involving the Hilbert kernel (global operation) and a subsequent differentiation operation (local operation). The differentiation is implemented as a finite difference of two (Hilbert filtered) data samples and carried out as part of the backprojection step. The spacing between the two samples, which defines the low-pass characteristics of the filter operation, can thus be selected individually for each point in the image volume. We here define the sample spacing to follow the magnification of the divergent-beam geometry and thus obtain a novel, depth-dependent filtering algorithm for circular CB-CT. We evaluate this resulting algorithm using computer-simulated CB data and demonstrate that our algorithm yields results where spatial resolution and image noise are distributed much more uniformly over the field-of-view, compared to Feldkamp's approach.
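The key decomposition above, ramp filtering as a global Hilbert convolution followed by a local finite difference whose sample spacing sets the low-pass behavior, can be illustrated in 1D as below. In the actual cone-beam algorithm the spacing varies per image point with the divergent-beam magnification and the difference is taken during backprojection; this sketch, with its function name and interpolation choices, is only an assumed illustration.

```python
import numpy as np

def hilbert_then_difference(row, dt, spacing):
    # Hilbert filter (global, frequency response -1j*sign(nu)), then a finite
    # difference of two filtered samples; dividing by 2*pi*spacing makes this
    # approach the ramp filter |nu| as spacing -> 0.  Larger spacing acts as a
    # low-pass (band-limiting) choice on the effective kernel.
    n = len(row)
    nu = np.fft.fftfreq(n, d=dt)
    h = np.fft.ifft(np.fft.fft(row) * (-1j * np.sign(nu))).real
    t = np.arange(n) * dt
    hp = np.interp(t + spacing / 2.0, t, h)
    hm = np.interp(t - spacing / 2.0, t, h)
    return (hp - hm) / (2.0 * np.pi * spacing)
```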
ABSTRACT
For situations of cone-beam scanning where the measurements are incomplete, we propose a method to quantify the severity of the missing information at each voxel. This incompleteness metric is geometric; it uses only the relative locations of all cone-beam vertices with respect to the voxel in question, and does not apply global information such as the object extent or the pattern of incompleteness of other voxels. The values are non-negative, with zero indicating "least incompleteness," i.e. minimal danger of incompleteness artifacts. The incompleteness value can be related to the severity of the potential reconstruction artifact at the voxel location, independent of reconstruction algorithm. We performed a computer simulation of x-ray sources along a circular trajectory, and used small multi-disk test-objects to examine the local effects of data incompleteness. The observed behavior of the reconstructed test-objects quantitatively matched the precalculated incompleteness values. A second simulation of a hypothetical SPECT breast imaging system used only 12 pinholes. Reconstructions were performed using analytic and iterative methods, and five reconstructed test-objects matched the behavior predicted by the incompleteness model. The model is based on known sufficiency conditions for data incompleteness, and provides strong predictive guidance for what can go wrong with incomplete cone-beam data.
ABSTRACT
Joint image reconstruction for multiphase CT can potentially improve image quality and reduce dose by leveraging the shared information among the phases. Multiphase CT scans are acquired sequentially. Inter-scan patient breathing causes small organ shifts and organ boundary misalignment among different phases. Existing multi-channel regularizers such as the joint total variation (TV) can introduce artifacts at misaligned organ boundaries. We propose a multi-channel regularizer using the infimal convolution (inf-conv) between a joint TV and a separable TV. It is robust against organ misalignment; it can work like a joint TV or a separable TV depending on a parameter setting. The effects of the parameter in the inf-conv regularizer are analyzed in detail. The properties of the inf-conv regularizer are then investigated numerically in a multi-channel image denoising setting. For algorithm implementation, the inf-conv regularizer is nonsmooth; inverse problems with the inf-conv regularizer can be solved using a number of primal-dual algorithms from nonsmooth convex minimization. Our numerical studies using synthesized 2-phase patient data and phantom data demonstrate that the inf-conv regularizer can largely maintain the advantages of the joint TV over the separable TV and reduce image artifacts of the joint TV due to organ misalignment.
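Written out, the infimal convolution construction referred to above takes the following generic form; the weighting α and the isotropic gradient norms are assumptions, and the paper's exact parameterization may differ.

```latex
\[
  R(u) \;=\; \bigl(\mathrm{JTV}\,\square\,\alpha\,\mathrm{STV}\bigr)(u)
        \;=\; \min_{w}\ \mathrm{JTV}(w) \;+\; \alpha\,\mathrm{STV}(u-w),
\]
\[
  \mathrm{JTV}(u)=\sum_{j}\sqrt{\textstyle\sum_{c=1}^{C}\lVert(\nabla u_c)_j\rVert_2^{2}},
  \qquad
  \mathrm{STV}(u)=\sum_{c=1}^{C}\sum_{j}\lVert(\nabla u_c)_j\rVert_2 .
\]
```

In this form, a large α forces w toward u so that R behaves like the joint TV, while a small α lets inter-channel differences be absorbed by the separable term, which tolerates misaligned organ boundaries channel by channel.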
Assuntos
Artefatos , Processamento de Imagem Assistida por Computador , Algoritmos , Humanos , Imagens de Fantasmas , Tomografia Computadorizada por Raios XRESUMO
Three-dimensional cone-beam imaging has become valuable in interventional radiology. Currently, this tool, referred to as C-arm CT, employs a circular short-scan for data acquisition, which limits the axial volume coverage and yields unavoidable cone-beam artifacts. To improve flexibility in axial coverage and image quality, there is a critical need for novel data acquisition geometries and related image reconstruction algorithms. For this purpose, we previously introduced the extended line-ellipse-line trajectory, which allows complete scanning of arbitrary volume lengths in the axial direction together with adjustable axial beam collimation, from narrow to wide depending on the targeted application. A first implementation of this trajectory on a state-of-the-art robotic angiography system is reported here. More specifically, an assessment of the quality of this first implementation is presented. The assessment is in terms of geometric fidelity and repeatability, complemented with a first visual inspection of how well the implementation enables imaging an anthropomorphic head phantom. The geometric fidelity analysis shows that the ideal trajectory is closely emulated, with only minor deviations that have no impact on data completeness and clinical practicality. Also, mean backprojection errors over short-term repetitions are shown to be below the detector pixel size at field-of-view center for most views, which indicates repeatability is satisfactory for clinical utilization. These repeatability observations are further supported by values of the Structural Similarity Index Metric above 94% for reconstructions of the FORBILD head phantom from computer-simulated data based on repeated data acquisition geometries. Last, the real data experiment with the anthropomorphic head phantom shows that the high contrast features of the phantom are well reconstructed without distortions as well as without breaks or other disturbing transition zones, which was not obvious given the complexity of the data acquisition geometry and the major variations in axial coverage that occur over the scan.
Subjects
Computed Tomography Angiography/instrumentation, Robotics, Algorithms, Artifacts, Head/blood supply, Head/diagnostic imaging, Computer-Assisted Image Processing, Imaging Phantoms
ABSTRACT
Large field of view cone-beam computed tomography (CBCT) is being achieved using circular source and detector trajectories. These circular trajectories are known to collect insufficient data for accurate image reconstruction. Although various descriptions of the missing information exist, the manifestation of this lack of data in reconstructed images is generally nonintuitive. One model predicts that the missing information corresponds to a shift-variant cone of missing frequency components. This description implies that artifacts depend on the imaging geometry, as well as the frequency content of the imaged object. In particular, objects with a large proportion of energy distributed over frequency bands that coincide with the missing cone will be most compromised. These predictions were experimentally verified by imaging small, localized objects (acrylic spheres, stacked disks) at varying positions in the object space and observing the frequency spectra of the reconstructions. Measurements of the internal angle of the missing cone agreed well with theory, indicating a right circular cone for points on the rotation axis, and an oblique, circular cone elsewhere. In the former case, the largest internal angle with respect to the vertical axis corresponds to the (half) cone angle of the CBCT system (typically approximately 5-7.5 degrees in IGRT). Object recovery was also found to be strongly dependent on the distribution of the object's frequency spectrum relative to the missing cone, as expected. The observed artifacts were also reproducible via removal of local frequency components, further supporting the theoretical model. Larger objects with differing internal structures (cellular polyurethane, solid acrylic) were also imaged and interpreted with respect to the previous results. Finally, small animal data obtained using a clinical CBCT scanner were observed for evidence of the missing cone. This study provides insight into the influence of incomplete data collection on the appearance of objects imaged in large field of view CBCT.
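For the on-axis case quoted above, the internal angle of the missing cone can be obtained directly from the Tuy condition. Assuming an ideal circular source trajectory of radius R in the plane z = 0 and a point on the rotation axis at height z0, a plane through the point whose normal makes polar angle theta with the axis meets the trajectory if and only if R sin(theta) >= |z0| cos(theta), so the unmeasured plane normals (and hence missing frequency directions) form the cone

```latex
\[
  \theta \;<\; \arctan\!\frac{\lvert z_0\rvert}{R},
\]
```

whose half-angle at the edge of the axial field of view coincides with the (half) cone angle of the system, consistent with the measurements reported above.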
Subjects
Artifacts, Cone-Beam Computed Tomography/methods, Fourier Analysis, Biological Models, Animals, Computer-Assisted Image Processing, Imaging Phantoms, Rabbits
ABSTRACT
We present a new image reconstruction algorithm for helical cone-beam computed tomography (CT). This algorithm is designed for data collected at or near maximum pitch, and provides a theoretically exact and stable reconstruction while beneficially using all measured data. The main operations involved are a differentiated backprojection and a finite-support Hilbert transform inversion. These operations are applied onto M-lines, and the beneficial use of all measured data is gained from averaging three volumes reconstructed each with a different choice of M-lines. The technique is overall similar to that presented by one of the authors in a previous publication, but operates volume-wise, instead of voxel-wise, which yields a significantly more efficient reconstruction procedure. The algorithm is presented in detail. Also, preliminary results from computer-simulated data are provided to demonstrate the numerical stability of the algorithm, the beneficial use of redundant data and the ability to process data collected with an angular flying focal spot.
Subjects
Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing/methods, Algorithms, Head/diagnostic imaging, Biological Models, Imaging Phantoms, Thoracic Radiography, Reproducibility of Results
ABSTRACT
PURPOSE: The computational burden associated with model-based iterative reconstruction (MBIR) is still a practical limitation. Iterative coordinate descent (ICD) is an optimization approach for MBIR that has sometimes been thought to be incompatible with modern computing architectures, especially graphics processing units (GPUs). The purpose of this work is to extend the previously released open-source FreeCT_ICD to include GPU acceleration and to demonstrate computational performance with ICD that is comparable with simultaneous update approaches. METHODS: FreeCT_ICD uses a stored system matrix (SSM), which precalculates the forward projector in the form of a sparse matrix and then reconstructs on a rotating coordinate grid to exploit helical symmetry. In our GPU ICD implementation, we shuffle the sinogram memory ordering such that data accesses in the sinogram coalesce into fewer transactions. We also update NS voxels in the xy-plane simultaneously to improve occupancy. Conventional ICD updates voxels sequentially (NS = 1). Using NS > 1 eliminates existing convergence guarantees. Convergence behavior in a clinical dataset was therefore studied empirically. RESULTS: On a pediatric dataset with a sinogram size of 736 × 16 × 13860 reconstructed to a matrix size of 512 × 512 × 128, our code requires about 20 s per iteration on a single GPU compared to 2300 s per iteration for a 6-core CPU using FreeCT_ICD. After 400 iterations, the proposed and reference codes converge to within 2 HU RMS difference (RMSD). Using a wFBP initialization, convergence to within 10 HU RMSD is achieved within 4 min. Convergence is similar for NS values between 1 and 256, and NS = 16 was sufficient to achieve maximum performance. Divergence was not observed until NS > 1024. CONCLUSIONS: With appropriate modifications, ICD may be able to achieve computational performance competitive with simultaneous update algorithms currently used for MBIR.
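For readers unfamiliar with ICD, the core per-voxel update that both the CPU and GPU codes perform is sketched below for a penalized weighted-least-squares objective; the GPU version described above additionally reorders sinogram memory and updates NS in-plane voxels at once. A dense matrix and the regularizer callbacks reg_grad/reg_curv are placeholders for illustration (the released code uses a stored sparse system matrix), so this is a sketch, not the FreeCT_ICD implementation.

```python
import numpy as np

def icd_sweep(A, w, y, x, reg_grad, reg_curv):
    # One sequential ICD pass (NS = 1) over voxels for
    #   0.5 * (Ax - y)^T diag(w) (Ax - y) + R(x).
    # A: dense (n_rays, n_vox) array for illustration; w: statistical weights.
    r = A @ x - y                                   # current residual, kept up to date
    for j in range(x.size):
        a_j = A[:, j]
        theta1 = a_j @ (w * r) + reg_grad(x, j)     # derivative of objective wrt x_j
        theta2 = a_j @ (w * a_j) + reg_curv(x, j)   # curvature (or surrogate curvature)
        step = -theta1 / theta2
        x[j] += step
        r += a_j * step                             # update residual after the voxel change
    return x
```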
Subjects
Algorithms, Computer-Assisted Image Processing/methods, Child, Computers, Factual Databases, Humans, Time Factors, X-Ray Computed Tomography
ABSTRACT
PURPOSE: Model-based iterative reconstruction is a promising approach to achieve dose reduction without affecting image quality in diagnostic x-ray computed tomography (CT). In the problem formulation, it is common to enforce non-negative values to accommodate the physical non-negativity of x-ray attenuation. Using this a priori information is believed to be beneficial in terms of image quality and convergence speed. However, enforcing non-negativity imposes limitations on the problem formulation and the choice of optimization algorithm. For these reasons, it is critical to understand the value of the non-negativity constraint. In this work, we present an investigation that sheds light on the impact of this constraint. METHODS: We primarily focus our investigation on the examination of properties of the converged solution. To avoid any possibly confounding bias, the reconstructions are all performed using a provably converging algorithm started from a zero volume. To keep the computational cost manageable, an axial CT scanning geometry with narrow collimation is employed. The investigation is divided into five experimental studies that challenge the non-negativity constraint in various ways, including noise, beam hardening, parametric choices, truncation, and photon starvation. These studies are complemented by a sixth one that examines the effect of using ordered subsets to obtain a satisfactory approximate result within 50 iterations. All studies are based on real data, which come from three phantom scans and one clinical patient scan. The reconstructions with and without the non-negativity constraint are compared in terms of image similarity and convergence speed. In select cases, the image similarity evaluation is augmented with quantitative image quality metrics such as the noise power spectrum and closeness to a known ground truth. RESULTS: For cases with moderate inconsistencies in the data, associated with noise and bone-induced beam hardening, our results show that the non-negativity constraint offers little benefit. By varying the regularization parameters in one of the studies, we observed that sufficient edge-preserving regularization tends to dilute the value of the constraint. For cases with strong data inconsistencies, the results are mixed: the constraint can be both beneficial and deleterious; in either case, however, the difference between using the constraint or not is small relative to the overall level of error in the image. The results with ordered subsets are encouraging in that they show similar observations. In terms of convergence speed, we only observed one major effect, in the study with data truncation; this effect favored the use of the constraint, but had no impact on our ability to obtain the converged solution without constraint. CONCLUSIONS: Our results did not highlight the non-negativity constraint as being strongly beneficial for diagnostic CT imaging. Altogether, we thus conclude that in some imaging scenarios, the non-negativity constraint could be disregarded to simplify the optimization problem or to adopt other forward projection models that require complex optimization machinery to be used together with non-negativity.