1.
J Vasc Interv Radiol ; 34(3): 409-419.e2, 2023 03.
Article in English | MEDLINE | ID: mdl-36529442

ABSTRACT

PURPOSE: To investigate the utility and generalizability of deep learning subtraction angiography (DLSA) for generating synthetic digital subtraction angiography (DSA) images without misalignment artifacts. MATERIALS AND METHODS: DSA images and native digital angiograms of the cerebral, hepatic, and splenic vasculature, both with and without motion artifacts, were retrospectively collected. Images were divided into a motion-free training set (n = 66 patients, 9,161 images) and a motion artifact-containing test set (n = 22 patients, 3,322 images). Using the motion-free set, the deep neural network pix2pix was trained to produce synthetic DSA images without misalignment artifacts directly from native digital angiograms. After training, the algorithm was tested on digital angiograms of hepatic and splenic vasculature with substantial motion. Four board-certified radiologists evaluated performance via visual assessment using a 5-grade Likert scale. Subgroup analyses were performed to analyze the impact of transfer learning and generalizability to novel vasculature. RESULTS: Compared with the traditional DSA method, the proposed approach was found to generate synthetic DSA images with significantly fewer background artifacts (a mean rating of 1.9 [95% CI, 1.1-2.6] vs 3.5 [3.5-4.4]; P = .01) without a significant difference in foreground vascular detail (mean rating of 3.1 [2.6-3.5] vs 3.3 [2.8-3.8], P = .19) in both the hepatic and splenic vasculature. Transfer learning significantly improved the quality of generated images (P < .001). CONCLUSIONS: DLSA successfully generates synthetic angiograms without misalignment artifacts, is improved through transfer learning, and generalizes reliably to novel vasculature that was not included in the training data.
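For context, the difference between the two workflows can be sketched in a few lines of Python; the log-subtraction form of conventional DSA is standard, while `generator` is a hypothetical stand-in for a trained pix2pix model rather than the authors' implementation:

```python
import numpy as np

def conventional_dsa(mask_frame, contrast_frame, eps=1e-6):
    """Classic DSA: log-subtract a contrast-free mask frame from a
    contrast-filled frame. Any patient motion between the two acquisitions
    appears as misalignment artifacts in the difference image."""
    return np.log(mask_frame + eps) - np.log(contrast_frame + eps)

def learned_dsa(native_frame, generator):
    """DLSA-style inference: a trained image-to-image network maps a single
    native angiogram directly to a synthetic subtraction image, so no mask
    frame (and hence no mask/contrast alignment) is required.
    `generator` is a placeholder for the trained model."""
    return generator(native_frame)
```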


Subject(s)
Deep Learning, Humans, Retrospective Studies, Digital Subtraction Angiography/methods, Liver, Artifacts
2.
IEEE Trans Med Imaging ; 42(3): 647-660, 2023 03.
Article in English | MEDLINE | ID: mdl-36227827

ABSTRACT

Deep-learning (DL) based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise. We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results showed that network linearization works well under normal exposure settings. For such applications, linearization can characterize image noise and resolution without running MC simulations. With this work, we provide the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully help popularize physics-related image quality measures for DL applications. Our methodology is general; it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images needed for network linearization.
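As a rough illustration of the linearization idea (not the paper's FBPConvNet tooling), a first-order noise estimate for one output pixel can be obtained from a single vector-Jacobian product in PyTorch; `net`, `x0`, and the i.i.d. input-noise assumption are placeholders:

```python
import torch

def linearized_pixel_std(net, x0, pixel_index, sigma_in):
    """First-order (Taylor) noise propagation: with f(x) ~ f(x0) + J (x - x0),
    the variance of output pixel k under i.i.d. input noise of std sigma_in is
    sigma_in**2 * ||J^T e_k||^2. The needed row of J comes from one backward
    pass, so no Monte Carlo noise realizations are required."""
    x0 = x0.clone().requires_grad_(True)
    out = net(x0).flatten()
    grad, = torch.autograd.grad(out[pixel_index], x0)
    return sigma_in * grad.norm().item()
```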


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Radiation Dosage, Computer Simulation, Computer-Assisted Image Processing/methods, Signal-to-Noise Ratio
3.
Med Phys ; 49(8): 5014-5037, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35651302

ABSTRACT

BACKGROUND: Various clinical studies show the potential for a wider quantitative role of diagnostic X-ray computed tomography (CT) beyond size measurements. Currently, however, the clinical use of attenuation values is limited by their lack of robustness. This issue can be observed even on the same scanner across patient size and positioning. There are different causes for the lack of robustness in the attenuation values; one possible source of error is beam hardening of the X-ray source spectrum. The conventional and well-established approach to address this issue is a calibration-based single-material beam hardening correction (BHC) using a water cylinder. PURPOSE: We investigate an alternative approach for single-material BHC with the aim of producing more robust attenuation values. The underlying hypothesis of this investigation is that calibration-based BHC automatically corrects for scattered radiation in a manner that is suboptimal in terms of bias as soon as the scanned object strongly deviates from the water cylinder used for calibration. METHODS: The approach we propose performs BHC via an analytical energy response model that is embedded into a correction pipeline that efficiently estimates and subtracts scattered radiation in a patient-specific manner prior to BHC. The estimation of scattered radiation is based on minimizing, on average, the squared difference between our corrected data and the vendor-calibrated data. The energy response model accounts for the spectral effects of the detector response and the prefiltration of the source spectrum, including a beam-shaping bowtie filter. The performance of the correction pipeline is first characterized with computer-simulated data. Afterward, it is tested using real 3-D CT data sets of two different phantoms, with various kV settings and phantom positions, assuming a circular data acquisition. The results are compared in the image domain to those from the scanner. RESULTS: For experiments with a water cylinder, the proposed correction pipeline leads to similar results as the vendor. For reconstructions of a QRM liver phantom with an extension ring, the proposed correction pipeline yielded more uniform and stable attenuation values for homogeneous materials within the phantom. For example, the root mean squared deviation between centered and off-centered phantom positioning was reduced from 6.6 to 1.8 HU in one profile. CONCLUSIONS: We have introduced a patient-specific approach for single-material BHC in diagnostic CT via the use of an analytical energy response model. This approach shows promising improvements in terms of robustness of attenuation values for large patient sizes. Our results contribute toward improving CT images so as to make CT attenuation values more reliable for use in clinical practice.
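A minimal sketch of the conventional calibration-style, single-material (water) correction that serves as the reference point here might look as follows; the detected spectrum, attenuation curve, and sampling grid are assumed inputs, and the authors' analytical energy response model and scatter estimation are not reproduced:

```python
import numpy as np

def water_bhc_lookup(spectrum, mu_water, lengths_cm, e_ref_idx):
    """Single-material (water) BHC: simulate polychromatic line integrals
    through known water thicknesses, then map measured values back onto the
    ideal monoenergetic line mu_water[e_ref_idx] * L."""
    w = spectrum / spectrum.sum()
    # polychromatic projection p(L) = -ln( sum_E w(E) * exp(-mu_water(E) * L) )
    p_poly = np.array([-np.log(np.sum(w * np.exp(-mu_water * L))) for L in lengths_cm])
    p_mono = mu_water[e_ref_idx] * lengths_cm           # ideal linear response
    return lambda p_measured: np.interp(p_measured, p_poly, p_mono)
```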


Subject(s)
X-Ray Computed Tomography, Water, Algorithms, Calibration, Humans, Imaging Phantoms, X-Ray Computed Tomography/methods, X-Rays
4.
J Med Imaging (Bellingham) ; 9(Suppl 1): 012205, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35309720

ABSTRACT

Purpose: For 50 years now, SPIE Medical Imaging (MI) conferences have been the premier forum for disseminating and sharing new ideas, technologies, and concepts on the physics of MI. Approach: Our overarching objective is to demonstrate and highlight the major trajectories of imaging physics and how they are informed by the community and science present and presented at SPIE MI conferences from its inception to now. Results: These contributions range from the development of image science, image quality metrology, and image reconstruction to digital x-ray detectors that have revolutionized MI modalities including radiography, mammography, fluoroscopy, tomosynthesis, and computed tomography (CT). Recent advances in detector technology such as photon-counting detectors continue to enable new capabilities in MI. Conclusion: As we celebrate the past 50 years, we are also excited about what the next 50 years of SPIE MI will bring to the physics of MI.

5.
Phys Med Biol ; 67(7)2022 03 23.
Article in English | MEDLINE | ID: mdl-34757943

ABSTRACT

The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest development tries to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
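As one concrete example of the convex algorithms such a review covers, a bare-bones FISTA loop for an l1-penalized least-squares problem is sketched below; the dense matrix `A` is only a stand-in for a CT system model:

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """Accelerated proximal gradient (FISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The prox of the l1 term is soft thresholding; acceleration comes from the
    extrapolated point z."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L             # gradient step at z
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```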


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, Algorithms, Artificial Intelligence, Brain, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods
6.
Phys Med Biol ; 67(3)2022 02 16.
Article in English | MEDLINE | ID: mdl-34920440

ABSTRACT

We are interested in learning the hyperparameters in a convex objective function in a supervised setting. The complex relationship between the input data to the convex problem and the desirable hyperparameters can be modeled by a neural network; the hyperparameters and the data then drive the convex minimization problem, whose solution is compared to training labels. In our previous work (Xu and Noo 2021 Phys. Med. Biol. 66 19NT01), we evaluated a prototype of this learning strategy in an optimization-based sinogram smoothing plus FBP reconstruction framework. A question arising in this setting is how to efficiently compute (backpropagate) the gradient from the solution of the optimization problem to the hyperparameters, to enable end-to-end training. In this work, we first develop general formulas for gradient backpropagation for a subset of convex problems, namely the proximal mapping. To illustrate the value of the general formulas and to demonstrate how to use them, we consider the specific instance of 1D quadratic smoothing (denoising), whose solution admits a dynamic programming (DP) algorithm. The general formulas lead to another DP algorithm for exact computation of the gradient of the hyperparameters. Our numerical studies demonstrate 55%-65% computation time savings by providing a custom gradient instead of relying on automatic differentiation in deep learning libraries. While our discussion focuses on 1D quadratic smoothing, our initial results (not presented) support the statement that the general formulas and the computational strategy apply equally well to TV or Huber smoothing problems on simple graphs whose solutions can be computed exactly via DP.
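To make the custom-gradient idea concrete, the sketch below differentiates the closed-form solution of a 1-D quadratic smoothing problem with respect to its hyperparameter via implicit differentiation; the dense linear algebra and the particular penalty are illustrative assumptions, not the paper's DP formulas:

```python
import numpy as np

def smooth_and_grad(y, lam):
    """x*(lam) = argmin_x 0.5*||x - y||^2 + 0.5*lam*||D x||^2 with D the
    neighboring-difference operator. The closed form is x* = (I + lam*D^T D)^{-1} y,
    and differentiating (I + lam*D^T D) x* = y w.r.t. lam gives the exact gradient
    dx*/dlam = -(I + lam*D^T D)^{-1} (D^T D) x*, i.e. a custom gradient that
    avoids unrolled automatic differentiation."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)                # (n-1, n) finite-difference matrix
    H = np.eye(n) + lam * (D.T @ D)
    x = np.linalg.solve(H, y)
    dx_dlam = -np.linalg.solve(H, D.T @ (D @ x))
    return x, dx_dlam
```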


Subject(s)
Algorithms, Neural Networks (Computer)
7.
Phys Med Biol ; 66(19)2021 09 20.
Article in English | MEDLINE | ID: mdl-34186530

ABSTRACT

We propose a hyperparameter learning framework that learns patient-specific hyperparameters for optimization-based image reconstruction problems for x-ray CT applications. The framework consists of two functional modules: (1) a hyperparameter learning module parameterized by a convolutional neural network, and (2) an image reconstruction module that takes as inputs both the noisy sinogram and the hyperparameters from (1) and generates the reconstructed images. As a proof-of-concept study, in this work we focus on a subclass of optimization-based image reconstruction problems with exactly computable solutions so that the whole network can be trained end-to-end in an efficient manner. Unlike existing hyperparameter learning methods, our proposed framework generates patient-specific hyperparameters from the sinogram of the same patient. Numerical studies demonstrate the effectiveness of our proposed approach compared to bi-level optimization.
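A toy end-to-end version of such a pipeline, with a small CNN predicting a single positive smoothing weight and an exactly solvable (hence differentiable) reconstruction stand-in, might look as follows in PyTorch; the architecture, the quadratic smoothing model, and the single-sample usage are assumptions made purely for illustration:

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Hypothetical hyperparameter module: maps a (1, 1, H, W) sinogram to one
    positive smoothing weight."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, sino):
        return nn.functional.softplus(self.features(sino)).squeeze()  # scalar > 0

def smooth_rows(rows, lam):
    """Exactly computable 'reconstruction' stand-in: quadratic smoothing of each
    detector row, x = (I + lam*D^T D)^{-1} y, differentiable w.r.t. lam so both
    modules can be trained end-to-end against reference data."""
    n = rows.shape[-1]
    D = torch.diff(torch.eye(n), dim=0)
    H = torch.eye(n) + lam * (D.T @ D)
    return torch.linalg.solve(H, rows.unsqueeze(-1)).squeeze(-1)

# hypothetical training step:
#   lam = HyperNet()(noisy_sino); loss = ((smooth_rows(rows, lam) - target) ** 2).mean()
```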


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, Humans, Computer-Assisted Image Processing/methods, Neural Networks (Computer), X-Rays
8.
Phys Med Biol ; 65(18): 185016, 2020 09 18.
Article in English | MEDLINE | ID: mdl-32512552

ABSTRACT

Three-dimensional cone-beam imaging has become valuable in interventional radiology. Currently, this tool, referred to as C-arm CT, employs a circular short-scan for data acquisition, which limits the axial volume coverage and yields unavoidable cone-beam artifacts. To improve flexibility in axial coverage and image quality, there is a critical need for novel data acquisition geometries and related image reconstruction algorithms. For this purpose, we previously introduced the extended line-ellipse-line trajectory, which allows complete scanning of arbitrary volume lengths in the axial direction together with adjustable axial beam collimation, from narrow to wide depending on the targeted application. A first implementation of this trajectory on a state-of-the-art robotic angiography system is reported here. More specifically, an assessment of the quality of this first implementation is presented. The assessment is in terms of geometric fidelity and repeatability, complemented with a first visual inspection of how well the implementation enables imaging an anthropomorphic head phantom. The geometric fidelity analysis shows that the ideal trajectory is closely emulated, with only minor deviations that have no impact on data completeness and clinical practicality. Also, mean backprojection errors over short-term repetitions are shown to be below the detector pixel size at field-of-view center for most views, which indicates repeatability is satisfactory for clinical utilization. These repeatability observations are further supported by values of the Structural Similarity Index Metric above 94% for reconstructions of the FORBILD head phantom from computer-simulated data based on repeated data acquisition geometries. Last, the real data experiment with the anthropomorphic head phantom shows that the high contrast features of the phantom are well reconstructed without distortions as well as without breaks or other disturbing transition zones, which was not obvious given the complexity of the data acquisition geometry and the major variations in axial coverage that occur over the scan.


Subject(s)
Computed Tomography Angiography/instrumentation, Robotics, Algorithms, Artifacts, Head/blood supply, Head/diagnostic imaging, Computer-Assisted Image Processing, Imaging Phantoms
9.
IEEE Trans Med Imaging ; 39(7): 2327-2338, 2020 07.
Article in English | MEDLINE | ID: mdl-31995477

ABSTRACT

Joint image reconstruction for multiphase CT can potentially improve image quality and reduce dose by leveraging the shared information among the phases. Multiphase CT scans are acquired sequentially. Inter-scan patient breathing causes small organ shifts and organ boundary misalignment among different phases. Existing multi-channel regularizers such as the joint total variation (TV) can introduce artifacts at misaligned organ boundaries. We propose a multi-channel regularizer using the infimal convolution (inf-conv) between a joint TV and a separable TV. It is robust against organ misalignment; it can work like a joint TV or a separable TV depending on a parameter setting. The effects of the parameter in the inf-conv regularizer are analyzed in detail. The properties of the inf-conv regularizer are then investigated numerically in a multi-channel image denoising setting. For algorithm implementation, the inf-conv regularizer is nonsmooth; inverse problems with the inf-conv regularizer can be solved using a number of primal-dual algorithms from nonsmooth convex minimization. Our numerical studies using synthesized 2-phase patient data and phantom data demonstrate that the inf-conv regularizer can largely maintain the advantages of the joint TV over the separable TV and reduce image artifacts of the joint TV due to organ misalignment.
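The two ingredients of the proposed regularizer can be written down directly; the forward-difference discretization below is an assumption, and evaluating the inf-conv itself, R(u) = inf over v + w = u of alpha*JTV(v) + beta*STV(w), would additionally require an inner minimization that is not shown:

```python
import numpy as np

def gradients(u):
    """Forward differences of a (channels, H, W) multiphase image, with the
    last row/column replicated so shapes are preserved."""
    gx = np.diff(u, axis=1, append=u[:, -1:, :])
    gy = np.diff(u, axis=2, append=u[:, :, -1:])
    return gx, gy

def joint_tv(u):
    """Joint (vectorial) TV: one norm per pixel over all channels, coupling the
    phases and favoring aligned edges."""
    gx, gy = gradients(u)
    return np.sum(np.sqrt(np.sum(gx ** 2 + gy ** 2, axis=0)))

def separable_tv(u):
    """Separable TV: sum of independent per-channel TVs, no coupling."""
    gx, gy = gradients(u)
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))
```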


Subject(s)
Artifacts, Computer-Assisted Image Processing, Algorithms, Humans, Imaging Phantoms, X-Ray Computed Tomography
10.
IEEE Trans Radiat Plasma Med Sci ; 4(1): 63-80, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33506155

ABSTRACT

For situations of cone-beam scanning where the measurements are incomplete, we propose a method to quantify the severity of the missing information at each voxel. This incompleteness metric is geometric; it uses only the relative locations of all cone-beam vertices with respect to the voxel in question, and does not use global information such as the object extent or the pattern of incompleteness of other voxels. The values are non-negative, with zero indicating "least incompleteness," i.e., minimal danger of incompleteness artifacts. The incompleteness value can be related to the severity of the potential reconstruction artifact at the voxel location, independent of the reconstruction algorithm. We performed a computer simulation of x-ray sources along a circular trajectory, and used small multi-disk test-objects to examine the local effects of data incompleteness. The observed behavior of the reconstructed test-objects quantitatively matched the precalculated incompleteness values. A second simulation of a hypothetical SPECT breast imaging system used only 12 pinholes. Reconstructions were performed using analytic and iterative methods, and five reconstructed test-objects matched the behavior predicted by the incompleteness model. The model is based on known sufficiency conditions for data completeness, and provides strong predictive guidance for what can go wrong with incomplete cone-beam data.
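The abstract does not spell out the metric itself, but a Tuy-condition-inspired score in the same spirit (the fraction of planes through a voxel that miss every cone-beam vertex) can be sketched as follows; the sampling density and angular tolerance are arbitrary choices and this is not the paper's exact formula:

```python
import numpy as np

def plane_miss_fraction(voxel, vertices, n_dirs=2000, tol_deg=1.0, seed=0):
    """For random plane normals through `voxel`, count the fraction of planes
    that do not pass close to any cone-beam vertex; 0 means every sampled plane
    through the voxel meets the source trajectory (Tuy-complete at that voxel)."""
    rng = np.random.default_rng(seed)
    normals = rng.normal(size=(n_dirs, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    rays = np.asarray(vertices, float) - np.asarray(voxel, float)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # the plane with normal n through the voxel contains a vertex when n . ray ~ 0
    cosines = np.abs(normals @ rays.T)            # (n_dirs, n_vertices)
    missed = np.min(cosines, axis=1) > np.sin(np.deg2rad(tol_deg))
    return missed.mean()
```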

11.
Med Phys ; 46(12): e835-e854, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811793

ABSTRACT

PURPOSE: Model-based iterative reconstruction is a promising approach to achieve dose reduction without affecting image quality in diagnostic x-ray computed tomography (CT). In the problem formulation, it is common to enforce non-negative values to accommodate the physical non-negativity of x-ray attenuation. Using this a priori information is believed to be beneficial in terms of image quality and convergence speed. However, enforcing non-negativity imposes limitations on the problem formulation and the choice of optimization algorithm. For these reasons, it is critical to understand the value of the non-negativity constraint. In this work, we present an investigation that sheds light on the impact of this constraint. METHODS: We primarily focus our investigation on the examination of properties of the converged solution. To avoid any possibly confounding bias, the reconstructions are all performed using a provably converging algorithm started from a zero volume. To keep the computational cost manageable, an axial CT scanning geometry with narrow collimation is employed. The investigation is divided into five experimental studies that challenge the non-negativity constraint in various ways, including noise, beam hardening, parametric choices, truncation, and photon starvation. These studies are complemented by a sixth one that examines the effect of using ordered subsets to obtain a satisfactory approximate result within 50 iterations. All studies are based on real data, which come from three phantom scans and one clinical patient scan. The reconstructions with and without the non-negativity constraint are compared in terms of image similarity and convergence speed. In select cases, the image similarity evaluation is augmented with quantitative image quality metrics such as the noise power spectrum and closeness to a known ground truth. RESULTS: For cases with moderate inconsistencies in the data, associated with noise and bone-induced beam hardening, our results show that the non-negativity constraint offers little benefit. By varying the regularization parameters in one of the studies, we observed that sufficient edge-preserving regularization tends to dilute the value of the constraint. For cases with strong data inconsistencies, the results are mixed: the constraint can be both beneficial and deleterious; in either case, however, the difference between using the constraint or not is small relative to the overall level of error in the image. The results with ordered subsets are encouraging in that they show similar observations. In terms of convergence speed, we only observed one major effect, in the study with data truncation; this effect favored the use of the constraint, but had no impact on our ability to obtain the converged solution without constraint. CONCLUSIONS: Our results did not highlight the non-negativity constraint as being strongly beneficial for diagnostic CT imaging. Altogether, we thus conclude that in some imaging scenarios, the non-negativity constraint could be disregarded to simplify the optimization problem or to adopt other forward projection models that require complex optimization machinery to be used together with non-negativity.
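The constraint under study amounts to one clipping step inside an iterative solver. A deliberately simplified penalized weighted-least-squares loop with a non-negativity toggle is sketched below; the operators, step size, and penalty gradient are placeholders and this is not the provably converging algorithm used in the paper:

```python
import numpy as np

def pwls_gradient_descent(A, y, w, beta, penalty_grad, n_iter=500, nonneg=True):
    """Gradient descent on 0.5*(Ax - y)^T W (Ax - y) + beta*R(x), started from a
    zero volume as in the study; `nonneg` switches the projection onto x >= 0
    on or off so the two solutions can be compared."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max() + beta)   # rough Lipschitz bound
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - y)) + beta * penalty_grad(x)
        x = x - step * grad
        if nonneg:
            x = np.maximum(x, 0.0)               # the non-negativity constraint
    return x
```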


Subject(s)
Computer-Assisted Image Processing/methods, Theoretical Models, X-Ray Computed Tomography, Algorithms, Artifacts, Hip/diagnostic imaging, Humans, Metals, Imaging Phantoms, Radiation Dosage
12.
Med Phys ; 46(12): e801-e809, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811796

ABSTRACT

PURPOSE: The computational burden associated with model-based iterative reconstruction (MBIR) is still a practical limitation. Iterative coordinate descent (ICD) is an optimization approach for MBIR that has sometimes been thought to be incompatible with modern computing architectures, especially graphics processing units (GPUs). The purpose of this work is to extend the previously released open-source FreeCT_ICD with GPU acceleration and to demonstrate computational performance with ICD that is comparable to simultaneous update approaches. METHODS: FreeCT_ICD uses a stored system matrix (SSM), which precalculates the forward projector in the form of a sparse matrix and then reconstructs on a rotating coordinate grid to exploit helical symmetry. In our GPU ICD implementation, we shuffle the sinogram memory ordering such that data accesses in the sinogram coalesce into fewer transactions. We also update NS voxels in the xy-plane simultaneously to improve occupancy. Conventional ICD updates voxels sequentially (NS = 1). Using NS > 1 eliminates existing convergence guarantees. Convergence behavior in a clinical dataset was therefore studied empirically. RESULTS: On a pediatric dataset with a sinogram size of 736 × 16 × 13860 reconstructed to a matrix size of 512 × 512 × 128, our code requires about 20 s per iteration on a single GPU compared to 2300 s per iteration for a 6-core CPU using FreeCT_ICD. After 400 iterations, the proposed and reference codes converge within 2 HU RMS difference (RMSD). Using a wFBP initialization, convergence within 10 HU RMSD is achieved within 4 min. Convergence is similar with NS values between 1 and 256, and NS = 16 was sufficient to achieve maximum performance. Divergence was not observed until NS > 1024. CONCLUSIONS: With appropriate modifications, ICD may be able to achieve computational performance competitive with simultaneous update algorithms currently used for MBIR.
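For readers unfamiliar with ICD, a single voxel update against a column-wise stored sparse system matrix can be sketched as follows; the penalty handling and the curvature surrogate are simplifications and the variable names are hypothetical, not the FreeCT_ICD code:

```python
import numpy as np

def icd_update_voxel(j, x, residual, A_cols, weights, beta, penalty_deriv):
    """One coordinate update for weighted least squares plus a penalty.
    A_cols[j] = (rows, vals) is the sparse column of the stored system matrix
    for voxel j, and `residual` = y - A x is maintained so each update is cheap.
    Updating NS such voxels concurrently is what the GPU version exploits."""
    rows, vals = A_cols[j]
    w = weights[rows]
    numerator = np.sum(vals * w * residual[rows]) - beta * penalty_deriv(x, j)
    denominator = np.sum(vals * w * vals) + beta        # simple curvature surrogate
    dx = numerator / denominator
    x[j] += dx
    residual[rows] -= vals * dx        # keep the residual consistent with new x[j]
    return x, residual
```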


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Child, Computers, Factual Databases, Humans, Time Factors, X-Ray Computed Tomography
14.
Med Phys ; 46(10): 4563-4574, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31396974

ABSTRACT

PURPOSE: An important challenge for deep learning models is generalizing to new datasets that may be acquired with acquisition protocols different from the training set. It is not always feasible to expand training data to the range encountered in clinical practice. We introduce a new technique, physics-based data augmentation (PBDA), that can emulate new computed tomography (CT) data acquisition protocols. We demonstrate two forms of PBDA, emulating increases in slice thickness and reductions of dose, on the specific problem of false-positive reduction in the automatic detection of lung nodules. METHODS: We worked with CT images from the Lung Image Database Consortium (LIDC) collection. We employed a hybrid ensemble convolutional neural network (CNN), which consists of multiple CNN modules (VGG, DenseNet, ResNet), for a classification task of determining whether an image patch was a suspicious nodule or a false positive. To emulate a reduction in tube current, we injected noise by simulating forward projection, noise addition, and backprojection corresponding to 1.5 mAs (a "chest x-ray" dose). To simulate thick-slice CT scans from thin-slice CT scans, we grouped and averaged spatially contiguous slices within the thin-slice data. The neural network was trained with 10% of the LIDC dataset, selected to have either the highest tube current or the thinnest slices. The network was tested on the remaining data. We compared PBDA to a baseline with standard geometric augmentations (such as shifts and rotations) and Gaussian noise addition. RESULTS: PBDA improved the performance of the networks when generalizing to the test dataset in a limited number of cases. We found that the best performance was obtained by applying augmentation at very low doses (1.5 mAs), about an order of magnitude less than most screening protocols. In the baseline augmentation, a comparable level of Gaussian noise was injected. For dose-reduction PBDA, the average sensitivity of 0.931 for the hybrid ensemble network was not statistically different from the average sensitivity of 0.935 without PBDA. Similarly, for slice-thickness PBDA, the average sensitivity of 0.900 when augmenting with doubled simulated slice thicknesses was not statistically different from the average sensitivity of 0.895 without PBDA. While we observed improvements in some cases detailed in this paper, the overall picture suggests that PBDA may not be an effective data enrichment tool. CONCLUSIONS: PBDA is a newly proposed strategy for mitigating the performance loss of neural networks related to the variation of acquisition protocol between the training dataset and the data encountered in deployment or testing. We found that PBDA does not provide robust improvements with the four neural networks (three modules and the ensemble) tested and for the specific task of false-positive reduction in nodule detection.
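Both augmentations have simple prototypes, sketched below; the Poisson counting model and the photon budget `I0` are assumptions standing in for the noise-insertion tool actually used:

```python
import numpy as np

def thicken_slices(volume, factor=2):
    """Slice-thickness PBDA: average `factor` spatially contiguous thin slices
    (axis 0 = z) to emulate a thicker-slice acquisition of the same patient."""
    z = (volume.shape[0] // factor) * factor
    return volume[:z].reshape(-1, factor, *volume.shape[1:]).mean(axis=1)

def reduce_dose(sinogram, dose_fraction, I0=1e5, rng=None):
    """Dose-reduction PBDA in the projection domain (simplified): scale the
    photon budget, redraw Poisson counts, and convert back to line integrals."""
    rng = rng or np.random.default_rng()
    counts = I0 * dose_fraction * np.exp(-sinogram)
    noisy = rng.poisson(counts).astype(float)
    return -np.log(np.maximum(noisy, 0.5) / (I0 * dose_fraction))
```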


Subject(s)
Deep Learning, Computer-Assisted Image Processing/methods, Lung Neoplasms/diagnostic imaging, X-Ray Computed Tomography, False Positive Reactions, Humans, Normal Distribution, Radiation Dosage, Sensitivity and Specificity
15.
Med Phys ; 2018 Jun 01.
Article in English | MEDLINE | ID: mdl-29858509

ABSTRACT

PURPOSE: To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics, and CAD using CT imaging, we previously released an open-source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. METHODS: Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose heavy computational and storage requirements. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially proposed version, the rotating slices are themselves described using blobs. We have replaced this description with a unique model that relies on trilinear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, a reconstruction program developed on the Linux platform using C++ libraries and released under the open-source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software is described and evaluated by reconstructing datasets exported from a clinical scanner, consisting of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. RESULTS: For the ACR phantom, image quality was comparable to that of clinical reconstructions as well as reconstructions using the open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact on image quality associated with the use of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. CONCLUSION: FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open-source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both phantom and clinical image datasets.

16.
IEEE Trans Med Imaging ; 37(1): 162-172, 2018 01.
Article in English | MEDLINE | ID: mdl-28981412

ABSTRACT

We present a direct (noniterative) algorithm for 1-D quadratic data fitting with neighboring intensity differences penalized by the Huber function. Applications of such an algorithm include 1-D processing of medical signals, such as smoothing of tissue time concentration curves in kinetic data analysis or sinogram preprocessing, and using it as a subproblem solver for 2-D or 3-D image restoration and reconstruction. Dynamic programming was used to develop the direct algorithm. The problem was reformulated as a sequence of univariate optimization problems, for k = 1, ..., N, where N is the number of data points. The solution to the univariate problem at index k is parameterized by the solution at index k + 1, except at k = N. Solving the univariate optimization problem at k = N yields the solution to each problem in the sequence using back-tracking. Computational issues and memory cost are discussed in detail. Two numerical studies, tissue concentration curve smoothing and sinogram preprocessing for image reconstruction, are used to validate the direct algorithm and illustrate its practical applications. In the example of 1-D curve smoothing, the efficiency of the direct algorithm is compared with four iterative methods: iterative coordinate descent, Nesterov's accelerated gradient descent algorithm, FISTA, and an off-the-shelf second-order method. The first two methods were applied to the primal problem, the others to the dual problem. The comparisons show that the direct algorithm outperforms all other methods by a significant factor, which rapidly grows with the curvature of the Huber function. The second example, sinogram preprocessing, showed that the robustness and speed of the direct algorithm are maintained over a wide range of signal variations, and that noise and streaking artifacts could be reduced with almost no increase in computation time. We also outline how the proposed 1-D solver can be used for imaging applications.
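As a reference point (not the direct DP algorithm itself), the objective can be written down and handed to a generic solver, which is handy for validating a faster implementation on small signals; the weights, penalty strength, and Huber width below are assumed inputs:

```python
import numpy as np
from scipy.optimize import minimize

def huber(t, delta):
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t * t, delta * (a - 0.5 * delta))

def fit_1d_reference(y, w, lam, delta):
    """Minimize F(x) = 0.5*sum_i w_i*(x_i - y_i)^2 + lam*sum_i Huber(x_{i+1} - x_i)
    with a generic quasi-Newton solver; the paper's DP algorithm returns the
    exact minimizer directly and much faster."""
    def F(x):
        return 0.5 * np.sum(w * (x - y) ** 2) + lam * np.sum(huber(np.diff(x), delta))
    return minimize(F, y.astype(float), method="L-BFGS-B").x
```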


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Abdomen/diagnostic imaging, Humans, Statistical Models, Imaging Phantoms, Computer-Assisted Signal Processing
17.
Med Phys ; 44(9): e112, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28901613
18.
Phys Med Biol ; 62(18): N428-N435, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-28862998

ABSTRACT

We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding step to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficients in the patient body. Our results are simple yet appear to be previously unreported; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
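For problem 1 with uniform interval constraints, the sequential solution reduces to a one-line post-processing step; the unconstrained solver below is deliberately left abstract:

```python
import numpy as np

def constrained_tv_denoise(y, lam, lo, hi, unconstrained_solver):
    """Sequential solution described above: solve the unconstrained anisotropic-TV
    denoising problem, then clip to the common interval [lo, hi]. With uniform
    interval constraints, the clipped result solves the constrained problem."""
    x = unconstrained_solver(y, lam)     # argmin_x 0.5*||x - y||^2 + lam*TV(x)
    return np.clip(x, lo, hi)
```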


Subject(s)
Computer-Assisted Image Processing/standards, Theoretical Models, Neoplasms/diagnostic imaging, X-Ray Computed Tomography/methods, Algorithms, Anisotropy, Humans, Computer-Assisted Image Processing/methods, Signal-to-Noise Ratio
19.
Med Phys ; 44(4): 1337-1346, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28122122

ABSTRACT

PURPOSE: Lung cancer screening with low-dose CT has recently been approved for reimbursement, heralding the arrival of such screening services worldwide. Computer-aided detection (CAD) tools offer the potential to assist radiologists in detecting nodules in these screening exams. In lung screening, as in all CT exams, there is interest in further reducing radiation dose. However, the effects of continued dose reduction on CAD performance are not fully understood. In this work, we investigated the effect of reducing radiation dose on CAD lung nodule detection performance in a screening population. METHODS: The raw projection data files were collected from 481 patients who underwent low-dose screening CT exams at our institution as part of the National Lung Screening Trial (NLST). All scans were performed on a multidetector scanner (Sensation 64, Siemens Healthcare, Forchheim Germany) according to the NLST protocol, which called for a fixed tube current scan of 25 effective mAs for standard-sized patients and 40 effective mAs for larger patients. The raw projection data were input to a reduced-dose simulation software to create simulated reduced-dose scans corresponding to 50% and 25% of the original protocols. All raw data files were reconstructed at the scanner with 1 mm slice thickness and B50 kernel. The lungs were segmented semi-automatically, and all images and segmentations were input to an in-house CAD algorithm trained on higher dose scans (75-300 mAs). CAD findings were compared to a reference standard generated by an experienced reader. Nodule- and patient-level sensitivities were calculated along with false positives per scan, all of which were evaluated in terms of the relative change with respect to dose. Nodules were subdivided based on size and solidity into categories analogous to the LungRADS assessment categories, and sub-analyses were performed. RESULTS: From the 481 patients in this study, 82 had at least one nodule (prevalence of 17%) and 399 did not (83%). A total of 118 nodules were identified. Twenty-seven nodules (23%) corresponded to LungRADS category 4 based on size and composition, while 18 (15%) corresponded to LungRADS category 3 and 73 (61%) corresponded to LungRADS category 2. For solid nodules ≥8 mm, patient-level median sensitivities were 100% at all three dose levels, and mean sensitivities were 72%, 63%, and 63% at original, 50%, and 25% dose, respectively. Overall mean patient-level sensitivities for nodules ranging from 3 to 45 mm were 38%, 37%, and 38% at original, 50%, and 25% dose due to the prevalence of smaller nodules and nonsolid nodules in our reference standard. The mean false-positive rates were 3, 5, and 13 per case. CONCLUSIONS: CAD sensitivity decreased very slightly for larger nodules as dose was reduced, indicating that reducing the dose to 50% of original levels may be investigated further for use in CT screening. However, the effect of dose was small relative to the effect of the nodule size and solidity characteristics. The number of false positives per scan increased substantially at 25% dose, illustrating the importance of tuning CAD algorithms to very challenging, high-noise screening exams.


Subject(s)
Computer-Assisted Diagnosis/methods, Lung Neoplasms/diagnostic imaging, Mass Screening/methods, Radiation Dosage, X-Ray Computed Tomography/methods, Algorithms, Humans
20.
Med Phys ; 43(12): 6455, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27908185

ABSTRACT

PURPOSE: Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, even though these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. METHODS: One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context in which they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. RESULTS: Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. CONCLUSIONS: In many scenarios, Joseph's method seems to offer an interesting compromise between cost and computational effort. The distance-driven method offers the possibility to reduce bias but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
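Of the three models, Joseph's method is the most compact to write down. A simplified 2-D, single-ray version is sketched below; the pixel/geometry conventions and boundary handling are assumptions, and production projectors vectorize over all rays:

```python
import numpy as np

def joseph_ray_2d(image, src, det, spacing=1.0):
    """Joseph's method for one ray: step one row (or column) at a time along the
    dominant ray direction and linearly interpolate the image along the other
    axis. Pixel (i, j) is taken to be centered at (x, y) = (j*spacing, i*spacing);
    src and det are (x, y) points in the same frame."""
    src, det = np.asarray(src, float), np.asarray(det, float)
    d = det - src
    ny, nx = image.shape
    if abs(d[1]) >= abs(d[0]):                         # rows (y) are the dominant axis
        step = spacing * np.linalg.norm(d) / abs(d[1])  # path length per row crossing
        total = 0.0
        for i in range(ny):
            t = (i * spacing - src[1]) / d[1]
            if not 0.0 <= t <= 1.0:
                continue                               # row not between source and detector
            xpix = (src[0] + t * d[0]) / spacing
            j0 = int(np.floor(xpix))
            if 0 <= j0 < nx - 1:
                frac = xpix - j0
                total += (1.0 - frac) * image[i, j0] + frac * image[i, j0 + 1]
        return total * step
    # columns dominant: swap the roles of the two axes and reuse the same code
    return joseph_ray_2d(image.T, src[::-1], det[::-1], spacing)
```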


Subject(s)
Computer-Assisted Image Processing/methods, Theoretical Models, X-Ray Computed Tomography, Linear Models, Signal-to-Noise Ratio, Time Factors