Results 1 - 20 of 65
1.
J Vasc Interv Radiol ; 34(3): 409-419.e2, 2023 03.
Article in English | MEDLINE | ID: mdl-36529442

ABSTRACT

PURPOSE: To investigate the utility and generalizability of deep learning subtraction angiography (DLSA) for generating synthetic digital subtraction angiography (DSA) images without misalignment artifacts. MATERIALS AND METHODS: DSA images and native digital angiograms of the cerebral, hepatic, and splenic vasculature, both with and without motion artifacts, were retrospectively collected. Images were divided into a motion-free training set (n = 66 patients, 9,161 images) and a motion artifact-containing test set (n = 22 patients, 3,322 images). Using the motion-free set, the deep neural network pix2pix was trained to produce synthetic DSA images without misalignment artifacts directly from native digital angiograms. After training, the algorithm was tested on digital angiograms of hepatic and splenic vasculature with substantial motion. Four board-certified radiologists evaluated performance via visual assessment using a 5-grade Likert scale. Subgroup analyses were performed to analyze the impact of transfer learning and generalizability to novel vasculature. RESULTS: Compared with the traditional DSA method, the proposed approach was found to generate synthetic DSA images with significantly fewer background artifacts (a mean rating of 1.9 [95% CI, 1.1-2.6] vs 3.5 [3.5-4.4]; P = .01) without a significant difference in foreground vascular detail (mean rating of 3.1 [2.6-3.5] vs 3.3 [2.8-3.8], P = .19) in both the hepatic and splenic vasculature. Transfer learning significantly improved the quality of generated images (P < .001). CONCLUSIONS: DLSA successfully generates synthetic angiograms without misalignment artifacts, is improved through transfer learning, and generalizes reliably to novel vasculature that was not included in the training data.
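For context, the sketch below (not from the paper) shows the conventional DSA computation that DLSA is meant to replace: log-transformed mask and contrast frames are subtracted so static anatomy cancels, which is exactly why any inter-frame patient motion leaves misregistration artifacts. The frame simulation, noise model, and variable names are illustrative assumptions.

```python
import numpy as np

def conventional_dsa(mask_frame, contrast_frame, eps=1e-6):
    """Classical digital subtraction angiography: subtract log-transformed
    intensities so overlying anatomy cancels and only iodinated vessels remain.
    Patient motion between the two frames leaves misregistration artifacts,
    which is what DLSA avoids by synthesizing the subtraction from a single
    contrast-filled frame."""
    log_mask = np.log(np.clip(mask_frame, eps, None))
    log_fill = np.log(np.clip(contrast_frame, eps, None))
    return log_mask - log_fill  # positive where contrast attenuates the beam

# Toy illustration with a simulated 2-pixel shift between frames
rng = np.random.default_rng(0)
anatomy = rng.uniform(0.5, 1.0, size=(128, 128))
vessel = np.zeros_like(anatomy)
vessel[60:68, :] = 0.3
mask = anatomy
fill = np.roll(anatomy, 2, axis=0) * np.exp(-vessel)   # motion + contrast
dsa = conventional_dsa(mask, fill)  # vessel signal plus motion artifacts at edges
```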


Subject(s)
Deep Learning; Humans; Retrospective Studies; Angiography, Digital Subtraction/methods; Liver; Artifacts
2.
IEEE Trans Nucl Sci ; 63(3): 1359-1366, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27499550

ABSTRACT

Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diameter 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM), both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with ~2 mm pixels provided higher detection performance than those with ~4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than those commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
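As an illustration of the numerical-observer methodology referenced above, the following sketch implements a basic channelized non-prewhitened observer for a detection task on simulated patches. The channel definitions, signal, and noise model are assumptions for demonstration, and the localization (LROC) component of the study is not reproduced.

```python
import numpy as np

def gaussian_channels(size, widths=(2., 4., 8., 16.)):
    """A small bank of radially symmetric Gaussian channels (an illustrative
    choice; the study's channel definitions may differ)."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    U = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)

def cnpw_ratings(images, U, template):
    """Channelized non-prewhitened observer: rating = template' (U' g),
    i.e., no covariance inverse is applied to the channel outputs."""
    v = images.reshape(len(images), -1) @ U
    return v @ template

# Toy study: lesion-present vs lesion-absent patches (all data simulated here)
rng = np.random.default_rng(6)
size = 32
signal = np.exp(-((np.indices((size, size)) - size // 2)**2).sum(0) / (2 * 3.0**2))
absent = rng.normal(size=(200, size, size))
present = absent + 0.4 * signal
U = gaussian_channels(size)
v_a, v_p = absent.reshape(200, -1) @ U, present.reshape(200, -1) @ U
template = v_p.mean(0) - v_a.mean(0)            # CNPW template: mean difference only
t_a, t_p = cnpw_ratings(absent, U, template), cnpw_ratings(present, U, template)
auc = (t_p[:, None] > t_a[None, :]).mean()      # nonparametric AUC from the ratings
```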

3.
IEEE Trans Nucl Sci ; 60(1): 182-193, 2013 Jan 11.
Article in English | MEDLINE | ID: mdl-24436497

ABSTRACT

Task-based assessments of image quality constitute a rigorous, principled approach to the evaluation of imaging system performance. To conduct such assessments, it has been recognized that mathematical model observers are very useful, particularly for purposes of imaging system development and optimization. One type of model observer that has been widely applied in the medical imaging community is the channelized Hotelling observer (CHO). Since estimates of CHO performance typically include statistical variability, it is important to control and limit this variability to maximize the statistical power of image-quality studies. In a previous paper, we demonstrated that by including prior knowledge of the image class means, a large decrease in the bias and variance of CHO performance estimates can be realized. The purpose of the present work is to present refinements and extensions of the estimation theory given in our previous paper, which was limited to point estimation with equal numbers of images from each class. Specifically, we present and characterize minimum-variance unbiased point estimators for observer signal-to-noise ratio (SNR) that allow for unequal numbers of lesion-absent and lesion-present images. Building on this SNR point estimation theory, we then show that confidence intervals with exactly-known coverage probabilities can be constructed for commonly-used CHO performance measures. Moreover, we propose simple, approximate confidence intervals for CHO performance, and we show that they are well-behaved in most scenarios of interest.
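A minimal sketch of the plug-in CHO SNR estimate discussed above, with an option to substitute a known difference of class means for the sample estimate, follows. It does not reproduce the paper's minimum-variance unbiased estimators or confidence-interval constructions, and all data are simulated.

```python
import numpy as np

def cho_snr(v_absent, v_present, known_delta_mean=None):
    """Plug-in channelized Hotelling observer SNR from channelized samples
    (rows = images, columns = channels). If the true difference of class means
    is known -- the situation exploited in the paper -- it replaces the sample
    estimate; the paper's exact bias/variance corrections are not reproduced."""
    delta = (known_delta_mean if known_delta_mean is not None
             else v_present.mean(0) - v_absent.mean(0))
    S = 0.5 * (np.cov(v_absent, rowvar=False) + np.cov(v_present, rowvar=False))
    return np.sqrt(delta @ np.linalg.solve(S, delta))

# Toy data: 4 channels, equal covariance, known mean shift
rng = np.random.default_rng(7)
true_delta = np.array([0.8, 0.4, 0.2, 0.1])
v_a = rng.multivariate_normal(np.zeros(4), np.eye(4), size=100)
v_p = rng.multivariate_normal(true_delta, np.eye(4), size=100)
print(cho_snr(v_a, v_p))                                # sample-mean-based estimate
print(cho_snr(v_a, v_p, known_delta_mean=true_delta))   # known-means variant
# For Gaussian ratings with equal class covariances, AUC = Phi(SNR / sqrt(2)).
```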

4.
IEEE Trans Med Imaging ; 42(3): 647-660, 2023 03.
Article in English | MEDLINE | ID: mdl-36227827

ABSTRACT

Deep-learning (DL) based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using a first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise. We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results showed that network linearization works well under normal exposure settings. For such applications, linearization can characterize image noise and resolution without running MC simulations. With this work, we provide the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully popularize physics-related image quality measures for DL applications. Our methodology is general; it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images that are needed for network linearization.
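The following toy sketch illustrates the linearization idea on a stand-in "network": the first-order Taylor action (a Jacobian-vector product) is approximated by finite differences and used to propagate input noise, rather than repeating full reconstructions for every noise realization. The network, noise level, and stochastic probing scheme are assumptions; the paper's analytical covariance computation is not reproduced.

```python
import numpy as np

def toy_network(x):
    """Stand-in for a trained DL mapping (illustrative only)."""
    return np.tanh(1.5 * x) + 0.1 * x

def jvp_fd(f, x0, v, eps=1e-4):
    """Linearized action of f at x0 on direction v, i.e. J(x0) @ v,
    approximated with a central finite difference."""
    return (f(x0 + eps * v) - f(x0 - eps * v)) / (2 * eps)

rng = np.random.default_rng(1)
x0 = rng.normal(size=256)      # noise-free input (e.g., an image patch)
sigma = 0.05                   # assumed input noise level
# Push noise samples through the *linearized* network only; for a linear map
# the output covariance is J Sigma J', so probing with white noise estimates
# the per-pixel output standard deviation.
samples = np.stack([jvp_fd(toy_network, x0, sigma * rng.normal(size=x0.shape))
                    for _ in range(500)])
linearized_std = samples.std(axis=0)   # per-pixel noise estimate from the linearization
```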


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Radiation Dosage; Computer Simulation; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio
5.
Med Phys ; 39(3): 1530-41, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22380385

ABSTRACT

PURPOSE: Dedicated breast CT prototypes used in clinical investigations utilize a single circular source trajectory and cone-beam geometry with flat-panel detectors, which does not satisfy data-sufficiency conditions and could lead to cone-beam artifacts. Hence, this work investigated the glandular dose characteristics of a circle-plus-line trajectory that fulfills data-sufficiency conditions for image reconstruction in dedicated breast CT. METHODS: Monte Carlo-based computer simulations were performed using the GEANT4 toolkit and were validated against previously reported normalized glandular dose coefficients for one prototype breast CT system. Upon validation, Monte Carlo simulations were performed to determine the normalized glandular dose coefficients as a function of x-ray source position along the line scan. The source-to-axis-of-rotation distance and the source-to-detector distance were maintained constant at 65 and 100 cm, respectively, in all simulations. The ratio of the normalized glandular dose coefficient at each source position along the line scan to that for the circular scan, defined as the relative normalized glandular dose coefficient (RD(g)N), was studied by varying the diameter of the breast at the chest wall, the chest-wall to nipple distance, the skin thickness, the x-ray beam energy, and the glandular fraction of the breast. RESULTS: The RD(g)N metric, when stated as a function of source position along the line scan relative to the maximum length of line scan needed for data sufficiency, was found to be minimally dependent on breast diameter, chest-wall to nipple distance, skin thickness, glandular fraction, and x-ray photon energy. This observation facilitates easy estimation of the average glandular dose of the line scan. Polynomial fit equations for computing the RD(g)N, and hence the average glandular dose, are provided. CONCLUSIONS: For a breast CT system that acquires 300-500 projections over 2π for the circular scan, the addition of a line trajectory with equal source spacing and with constant x-ray beam quality (kVp and HVL) and mAs matched to the circular scan will result in less than a 0.18% increase in average glandular dose to the breast per projection along the line scan.


Subject(s)
Mammography/methods; Radiation Dosage; Monte Carlo Method; Reproducibility of Results
6.
IEEE Trans Nucl Sci ; 59(3): 568-578, 2012 Jun.
Article in English | MEDLINE | ID: mdl-23335815

ABSTRACT

This paper is motivated by the problem of image-quality assessment using model observers for the purpose of development and optimization of medical imaging systems. Specifically, we present a study regarding the estimation of the receiver operating characteristic (ROC) curve for the observer and associated summary measures. This study evaluates the statistical advantage that may be gained in ROC estimates of observer performance by assuming that the difference of the class means for the observer ratings is known. Such knowledge is frequently available in image-quality studies employing known-location lesion detection tasks together with linear model observers. The study is carried out by introducing parametric point and confidence interval estimators that incorporate a known difference of class means. An evaluation of the new estimators for the area under the ROC curve establishes that a large reduction in statistical variability can be achieved through incorporation of knowledge of the difference of class means. Namely, the mean 95% AUC confidence interval length can be as much as seven times smaller in some cases. We also examine how knowledge of the difference of class means can be advantageously used to compare the areas under two correlated ROC curves, and observe similar gains.

7.
Phys Med Biol ; 67(7)2022 03 23.
Article in English | MEDLINE | ID: mdl-34757943

ABSTRACT

The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Artificial Intelligence; Brain; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
8.
Phys Med Biol ; 67(3)2022 02 16.
Article in English | MEDLINE | ID: mdl-34920440

ABSTRACT

We are interested in learning the hyperparameters in a convex objective function in a supervised setting. The complex relationship between the input data to the convex problem and the desirable hyperparameters can be modeled by a neural network; the hyperparameters and the data then drive the convex minimization problem, whose solution is then compared to training labels. In our previous work (Xu and Noo 2021 Phys. Med. Biol. 66 19NT01), we evaluated a prototype of this learning strategy in an optimization-based sinogram smoothing plus FBP reconstruction framework. A question arising in this setting is how to efficiently compute (backpropagate) the gradient from the solution of the optimization problem, to the hyperparameters to enable end-to-end training. In this work, we first develop general formulas for gradient backpropagation for a subset of convex problems, namely the proximal mapping. To illustrate the value of the general formulas and to demonstrate how to use them, we consider the specific instance of 1D quadratic smoothing (denoising) whose solution admits a dynamic programming (DP) algorithm. The general formulas lead to another DP algorithm for exact computation of the gradient of the hyperparameters. Our numerical studies demonstrate a 55%-65% computation time savings by providing a custom gradient instead of relying on automatic differentiation in deep learning libraries. While our discussion focuses on 1D quadratic smoothing, our initial results (not presented) support the statement that the general formulas and the computational strategy apply equally well to TV or Huber smoothing problems on simple graphs whose solutions can be computed exactly via DP.
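As a concrete instance of differentiating the solution of a convex problem with respect to its hyperparameter, the sketch below solves the 1D quadratic smoothing problem with a direct linear solve and obtains dx/dλ by implicit differentiation. This is a generic illustration under simple assumptions, not the dynamic-programming algorithm developed in the paper.

```python
import numpy as np

def quad_smooth(y, lam):
    """Solve x(lam) = argmin_x 0.5*||x - y||^2 + 0.5*lam*||D x||^2,
    where D takes first differences (a simple 1D quadratic smoother)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n first-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y), A, D

def grad_wrt_lam(y, lam):
    """Hyperparameter gradient dx/dlam by implicit differentiation:
    (I + lam*D'D) dx/dlam = -D'D x   (exact; no autodiff needed)."""
    x, A, D = quad_smooth(y, lam)
    return np.linalg.solve(A, -(D.T @ D) @ x)

y = np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.default_rng(2).normal(size=50)
g_exact = grad_wrt_lam(y, lam=2.0)
g_fd = (quad_smooth(y, 2.0 + 1e-5)[0] - quad_smooth(y, 2.0 - 1e-5)[0]) / 2e-5
assert np.allclose(g_exact, g_fd, atol=1e-5)   # matches a finite-difference check
```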


Subject(s)
Algorithms; Neural Networks, Computer
9.
Med Phys ; 49(8): 5014-5037, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35651302

ABSTRACT

BACKGROUND: Various clinical studies show the potential for a wider quantitative role of diagnostic X-ray computed tomography (CT) beyond size measurements. Currently, however, the clinical use of attenuation values is limited due to their lack of robustness. This issue can be observed even on the same scanner across patient size and positioning. There are different causes for the lack of robustness in the attenuation values; one possible source of error is beam hardening of the X-ray source spectrum. The conventional and well-established approach to address this issue is a calibration-based single-material beam hardening correction (BHC) using a water cylinder. PURPOSE: We investigate an alternative approach for single-material BHC with the aim of producing more robust attenuation values. The underlying hypothesis of this investigation is that calibration-based BHC automatically corrects for scattered radiation in a manner that becomes suboptimal in terms of bias as soon as the scanned object deviates strongly from the water cylinder used for calibration. METHODS: The approach we propose performs BHC via an analytical energy response model that is embedded into a correction pipeline that efficiently estimates and subtracts scattered radiation in a patient-specific manner prior to BHC. The estimation of scattered radiation is based on minimizing, on average, the squared difference between our corrected data and the vendor-calibrated data. The energy response model accounts for the spectral effects of the detector response and the prefiltration of the source spectrum, including a beam-shaping bowtie filter. The performance of the correction pipeline is first characterized with computer-simulated data. Afterward, it is tested using real 3-D CT data sets of two different phantoms, with various kV settings and phantom positions, assuming a circular data acquisition. The results are compared in the image domain to those from the scanner. RESULTS: For experiments with a water cylinder, the proposed correction pipeline leads to results similar to the vendor's. For reconstructions of a QRM liver phantom with an extension ring, the proposed correction pipeline achieved a more uniform and stable outcome in the attenuation values of homogeneous materials within the phantom. For example, the root mean squared deviation between centered and off-centered phantom positioning was reduced from 6.6 to 1.8 HU in one profile. CONCLUSIONS: We have introduced a patient-specific approach for single-material BHC in diagnostic CT via the use of an analytical energy response model. This approach shows promising improvements in the robustness of attenuation values for large patient sizes. Our results contribute toward improving CT images so as to make CT attenuation values more reliable for use in clinical practice.
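For reference, the sketch below illustrates the conventional calibration-style single-material (water) BHC that the paper contrasts against: a polychromatic forward model over an assumed spectrum builds a polynomial mapping from measured to monochromatic-equivalent line integrals. The spectrum, energy bins, and attenuation values are approximate assumptions, not taken from the study.

```python
import numpy as np

# Illustrative water BHC lookup: assumed (not vendor) spectrum and mu values.
E = np.array([40., 50., 60., 70., 80., 90., 100.])          # keV bins (assumption)
S = np.array([0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05])    # normalized fluence (assumption)
mu_water = np.array([0.268, 0.227, 0.206, 0.193, 0.184, 0.176, 0.171])  # 1/cm, approximate

L = np.linspace(0.0, 40.0, 200)                  # water path lengths in cm
poly = -np.log((S * np.exp(-np.outer(L, mu_water))).sum(axis=1))  # polychromatic line integral
mono = mu_water[3] * L                           # ideal monochromatic value at 70 keV
coeffs = np.polyfit(poly, mono, deg=3)           # calibration polynomial

def water_bhc(p_measured):
    """Map measured polychromatic line integrals to water-equivalent
    monochromatic ones (classical single-material beam hardening correction)."""
    return np.polyval(coeffs, p_measured)
```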


Subject(s)
Tomography, X-Ray Computed; Water; Algorithms; Calibration; Humans; Phantoms, Imaging; Tomography, X-Ray Computed/methods; X-Rays
10.
J Med Imaging (Bellingham) ; 9(Suppl 1): 012205, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35309720

ABSTRACT

Purpose: For 50 years now, SPIE Medical Imaging (MI) conferences have been the premier forum for disseminating and sharing new ideas, technologies, and concepts on the physics of MI. Approach: Our overarching objective is to demonstrate and highlight the major trajectories of imaging physics and how they are informed by the community and science present and presented at SPIE MI conferences from its inception to now. Results: These contributions range from the development of image science, image quality metrology, and image reconstruction to digital x-ray detectors that have revolutionized MI modalities including radiography, mammography, fluoroscopy, tomosynthesis, and computed tomography (CT). Recent advances in detector technology such as photon-counting detectors continue to enable new capabilities in MI. Conclusion: As we celebrate the past 50 years, we are also excited about what the next 50 years of SPIE MI will bring to the physics of MI.

11.
Phys Med Biol ; 66(19)2021 09 20.
Article in English | MEDLINE | ID: mdl-34186530

ABSTRACT

We propose a hyperparameter learning framework that learns patient-specific hyperparameters for optimization-based image reconstruction problems for x-ray CT applications. The framework consists of two functional modules: (1) a hyperparameter learning module parameterized by a convolutional neural network, and (2) an image reconstruction module that takes as inputs both the noisy sinogram and the hyperparameters from (1) and generates the reconstructed images. As a proof-of-concept study, in this work we focus on a subclass of optimization-based image reconstruction problems with exactly computable solutions so that the whole network can be trained end-to-end in an efficient manner. Unlike existing hyperparameter learning methods, our proposed framework generates patient-specific hyperparameters from the sinogram of the same patient. Numerical studies demonstrate the effectiveness of our proposed approach compared to bi-level optimization.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; X-Rays
12.
Tsinghua Sci Technol ; 15(1): 36-43, 2010 Feb.
Article in English | MEDLINE | ID: mdl-21814455

ABSTRACT

We present a theoretically-exact and stable computed tomography (CT) reconstruction algorithm that is capable of handling interrupted illumination and therefore of using all measured data at arbitrary pitch. This algorithm is based on a differentiated backprojection (DBP) on M-lines. First, we discuss the problem of interrupted illumination and how it affects the DBP. Then we show that it is possible to take advantage of some properties of the DBP to compensate for the effects of interrupted illumination in a mathematically exact way. From there, we have developed an efficient algorithm which we have successfully implemented. We show encouraging preliminary results using both computer-simulated data and real data. Our results show that our method is capable of achieving a substantial reduction of image noise when decreasing the helix pitch compared with the maximum pitch case. We conclude that the proposed algorithm defines for the first time a theoretically-exact and stable reconstruction method that is capable of beneficially using all measured data at arbitrary pitch.

13.
Tsinghua Sci Technol ; 15(1): 17-24, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20585466

ABSTRACT

A direct filtered-backprojection (FBP) reconstruction algorithm is presented for circular cone-beam computed tomography (CB-CT) that allows the filter operation to be applied efficiently with shift-variant band-pass characteristics on the kernel function. Our algorithm is derived from the ramp-filter based FBP method of Feldkamp et al. and obtained by decomposing the ramp filtering into a convolution involving the Hilbert kernel (global operation) and a subsequent differentiation operation (local operation). The differentiation is implemented as a finite difference of two (Hilbert filtered) data samples and carried out as part of the backprojection step. The spacing between the two samples, which defines the low-pass characteristics of the filter operation, can thus be selected individually for each point in the image volume. We here define the sample spacing to follow the magnification of the divergent-beam geometry and thus obtain a novel, depth-dependent filtering algorithm for circular CB-CT. We evaluate this resulting algorithm using computer-simulated CB data and demonstrate that our algorithm yields results where spatial resolution and image noise are distributed much more uniformly over the field-of-view, compared to Feldkamp's approach.
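The decomposition underlying this algorithm can be checked numerically: ramp filtering equals a Hilbert filter followed by differentiation, and widening the finite-difference spacing used for the derivative adds low-pass weighting, which is what the depth-dependent filtering exploits. The sketch below is an illustrative 1D frequency-domain check with arbitrary units and a toy profile, not the paper's discrete kernels.

```python
import numpy as np

n, ds = 512, 1.0                          # samples and detector spacing (arbitrary units)
s = (np.arange(n) - n // 2) * ds
proj = np.exp(-0.5 * (s / 20.0) ** 2)     # smooth toy projection profile

f = np.fft.fftfreq(n, d=ds)               # cycles per unit length
omega = 2 * np.pi * f
P = np.fft.fft(proj)

ramp_filtered = np.fft.ifft(P * np.abs(omega)).real        # direct ramp filtering
hilbert = np.fft.ifft(P * (-1j) * np.sign(omega)).real     # Hilbert-filtered data

def finite_diff(g, spacing_samples):
    """Central finite difference with adjustable spacing (in samples);
    larger spacing = stronger low-pass, which the algorithm applies
    depth-dependently via the beam magnification."""
    k = spacing_samples
    return (np.roll(g, -k) - np.roll(g, k)) / (2 * k * ds)

close = finite_diff(hilbert, 1)    # nearly matches ramp_filtered
smooth = finite_diff(hilbert, 4)   # same backbone, visibly smoother
```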

14.
IEEE Trans Radiat Plasma Med Sci ; 4(1): 63-80, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33506155

ABSTRACT

For situations of cone-beam scanning where the measurements are incomplete, we propose a method to quantify the severity of the missing information at each voxel. This incompleteness metric is geometric; it uses only the relative locations of all cone-beam vertices with respect to the voxel in question, and does not apply global information such as the object extent or the pattern of incompleteness of other voxels. The values are non-negative, with zero indicating "least incompleteness," i.e. minimal danger of incompleteness artifacts. The incompleteness value can be related to the severity of the potential reconstruction artifact at the voxel location, independent of reconstruction algorithm. We performed a computer simulation of x-ray sources along a circular trajectory, and used small multi-disk test-objects to examine the local effects of data incompleteness. The observed behavior of the reconstructed test-objects quantitatively matched the precalculated incompleteness values. A second simulation of a hypothetical SPECT breast imaging system used only 12 pinholes. Reconstructions were performed using analytic and iterative methods, and five reconstructed test-objects matched the behavior predicted by the incompleteness model. The model is based on known sufficiency conditions for data incompleteness, and provides strong predictive guidance for what can go wrong with incomplete cone-beam data.
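The sketch below is a simple illustration, motivated by the classical completeness condition that every plane through a point should contain at least one cone-beam vertex, of how one might flag voxels with missing data for a sampled trajectory. It reports only the fraction of "missed" planes and is not the graded incompleteness metric proposed in the paper; the trajectory and voxel positions are toy values.

```python
import numpy as np

def incompleteness_fraction(voxel, vertices, n_normals=20000, seed=0):
    """Fraction of planes through the voxel that never intersect the sampled
    source trajectory (no sign change of the signed vertex distances)."""
    rng = np.random.default_rng(seed)
    normals = rng.normal(size=(n_normals, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    d = normals @ (vertices - voxel).T                   # (n_normals, n_vertices)
    misses = np.all(d > 0, axis=1) | np.all(d < 0, axis=1)
    return misses.mean()

# Circular trajectory of radius 60 (arbitrary units) in the z = 0 plane
t = np.linspace(0, 2 * np.pi, 720, endpoint=False)
circle = np.stack([60 * np.cos(t), 60 * np.sin(t), np.zeros_like(t)], axis=1)

print(incompleteness_fraction(np.array([0., 0., 0.]), circle))    # ~0: in-plane voxel
print(incompleteness_fraction(np.array([0., 0., 15.]), circle))   # > 0: off-plane voxel
```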

15.
IEEE Trans Med Imaging ; 39(7): 2327-2338, 2020 07.
Article in English | MEDLINE | ID: mdl-31995477

ABSTRACT

Joint image reconstruction for multiphase CT can potentially improve image quality and reduce dose by leveraging the shared information among the phases. Multiphase CT scans are acquired sequentially. Inter-scan patient breathing causes small organ shifts and organ boundary misalignment among different phases. Existing multi-channel regularizers such as the joint total variation (TV) can introduce artifacts at misaligned organ boundaries. We propose a multi-channel regularizer using the infimal convolution (inf-conv) between a joint TV and a separable TV. It is robust against organ misalignment; it can work like a joint TV or a separable TV depending on a parameter setting. The effects of the parameter in the inf-conv regularizer are analyzed in detail. The properties of the inf-conv regularizer are then investigated numerically in a multi-channel image denoising setting. For algorithm implementation, the inf-conv regularizer is nonsmooth; inverse problems with the inf-conv regularizer can be solved using a number of primal-dual algorithms from nonsmooth convex minimization. Our numerical studies using synthesized 2-phase patient data and phantom data demonstrate that the inf-conv regularizer can largely maintain the advantages of the joint TV over the separable TV and reduce image artifacts of the joint TV due to organ misalignment.
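To make the construction concrete, the sketch below evaluates a separable TV, a joint TV, and their infimal convolution for a toy two-channel image, with the inner minimization done numerically. The smoothing epsilon, weighting, and parameterization are simplifications for illustration and do not reproduce the paper's regularizer.

```python
import numpy as np
from scipy.optimize import minimize

eps = 1e-6  # small smoothing so the TV terms are differentiable (illustration only)

def grads(img):
    """Forward-difference gradients of a 2D image, zero at the far edges."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return gx, gy

def separable_tv(x):
    """Sum of per-channel TVs: channels are regularized independently."""
    return sum(np.sqrt(gx**2 + gy**2 + eps).sum() for gx, gy in map(grads, x))

def joint_tv(x):
    """Channels share one gradient magnitude per pixel (coupled edges)."""
    g2 = sum(gx**2 + gy**2 for gx, gy in map(grads, x))
    return np.sqrt(g2 + eps).sum()

def infconv_tv(x, w=1.0):
    """Infimal convolution of joint TV and weighted separable TV:
    min_u jointTV(u) + w * sepTV(x - u)."""
    shape = x.shape
    obj = lambda u: joint_tv(u.reshape(shape)) + w * separable_tv(x - u.reshape(shape))
    res = minimize(obj, x0=np.zeros(x.size), method="L-BFGS-B")
    return res.fun

two_phase = np.random.default_rng(3).normal(size=(2, 8, 8))   # toy 2-phase image
print(separable_tv(two_phase), joint_tv(two_phase), infconv_tv(two_phase))
```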


Subject(s)
Artifacts; Image Processing, Computer-Assisted; Algorithms; Humans; Phantoms, Imaging; Tomography, X-Ray Computed
16.
Phys Med Biol ; 65(18): 185016, 2020 09 18.
Article in English | MEDLINE | ID: mdl-32512552

ABSTRACT

Three-dimensional cone-beam imaging has become valuable in interventional radiology. Currently, this tool, referred to as C-arm CT, employs a circular short-scan for data acquisition, which limits the axial volume coverage and yields unavoidable cone-beam artifacts. To improve flexibility in axial coverage and image quality, there is a critical need for novel data acquisition geometries and related image reconstruction algorithms. For this purpose, we previously introduced the extended line-ellipse-line trajectory, which allows complete scanning of arbitrary volume lengths in the axial direction together with adjustable axial beam collimation, from narrow to wide depending on the targeted application. A first implementation of this trajectory on a state-of-the-art robotic angiography system is reported here. More specifically, an assessment of the quality of this first implementation is presented. The assessment is in terms of geometric fidelity and repeatability, complemented with a first visual inspection of how well the implementation enables imaging an anthropomorphic head phantom. The geometric fidelity analysis shows that the ideal trajectory is closely emulated, with only minor deviations that have no impact on data completeness and clinical practicality. Also, mean backprojection errors over short-term repetitions are shown to be below the detector pixel size at field-of-view center for most views, which indicates repeatability is satisfactory for clinical utilization. These repeatability observations are further supported by values of the Structural Similarity Index Metric above 94% for reconstructions of the FORBILD head phantom from computer-simulated data based on repeated data acquisition geometries. Last, the real data experiment with the anthropomorphic head phantom shows that the high contrast features of the phantom are well reconstructed without distortions as well as without breaks or other disturbing transition zones, which was not obvious given the complexity of the data acquisition geometry and the major variations in axial coverage that occur over the scan.


Subject(s)
Computed Tomography Angiography/instrumentation; Robotics; Algorithms; Artifacts; Head/blood supply; Head/diagnostic imaging; Image Processing, Computer-Assisted; Phantoms, Imaging
17.
Med Phys ; 36(2): 500-12, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19291989

ABSTRACT

Large field-of-view cone-beam computed tomography (CBCT) is being achieved using circular source and detector trajectories. These circular trajectories are known to collect insufficient data for accurate image reconstruction. Although various descriptions of the missing information exist, the manifestation of this lack of data in reconstructed images is generally nonintuitive. One model predicts that the missing information corresponds to a shift-variant cone of missing frequency components. This description implies that artifacts depend on the imaging geometry, as well as on the frequency content of the imaged object. In particular, objects with a large proportion of energy distributed over frequency bands that coincide with the missing cone will be most compromised. These predictions were experimentally verified by imaging small, localized objects (acrylic spheres, stacked disks) at varying positions in the object space and observing the frequency spectra of the reconstructions. Measurements of the internal angle of the missing cone agreed well with theory, indicating a right circular cone for points on the rotation axis, and an oblique, circular cone elsewhere. In the former case, the largest internal angle with respect to the vertical axis corresponds to the (half) cone angle of the CBCT system (typically approximately 5-7.5 degrees in IGRT). Object recovery was also found to be strongly dependent on the distribution of the object's frequency spectrum relative to the missing cone, as expected. The observed artifacts were also reproducible via removal of local frequency components, further supporting the theoretical model. Larger objects with differing internal structures (cellular polyurethane, solid acrylic) were also imaged and interpreted with respect to the previous results. Finally, small-animal data obtained using a clinical CBCT scanner were examined for evidence of the missing cone. This study provides insight into the influence of incomplete data collection on the appearance of objects imaged in large field-of-view CBCT.


Subject(s)
Artifacts; Cone-Beam Computed Tomography/methods; Fourier Analysis; Models, Biological; Animals; Image Processing, Computer-Assisted; Phantoms, Imaging; Rabbits
18.
Phys Med Biol ; 54(15): 4625-44, 2009 Aug 07.
Article in English | MEDLINE | ID: mdl-19590120

ABSTRACT

We present a new image reconstruction algorithm for helical cone-beam computed tomography (CT). This algorithm is designed for data collected at or near maximum pitch, and provides a theoretically exact and stable reconstruction while beneficially using all measured data. The main operations involved are a differentiated backprojection and a finite-support Hilbert transform inversion. These operations are applied onto M-lines, and the beneficial use of all measured data is gained from averaging three volumes reconstructed each with a different choice of M-lines. The technique is overall similar to that presented by one of the authors in a previous publication, but operates volume-wise, instead of voxel-wise, which yields a significantly more efficient reconstruction procedure. The algorithm is presented in detail. Also, preliminary results from computer-simulated data are provided to demonstrate the numerical stability of the algorithm, the beneficial use of redundant data and the ability to process data collected with an angular flying focal spot.


Subject(s)
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Algorithms; Head/diagnostic imaging; Models, Biological; Phantoms, Imaging; Radiography, Thoracic; Reproducibility of Results
19.
Med Phys ; 46(12): e801-e809, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811796

ABSTRACT

PURPOSE: The computational burden associated with model-based iterative reconstruction (MBIR) is still a practical limitation. Iterative coordinate descent (ICD) is an optimization approach for MBIR that has sometimes been thought to be incompatible with modern computing architectures, especially graphics processing units (GPUs). The purpose of this work is to extend the previously released open-source FreeCT_ICD with GPU acceleration and to demonstrate computational performance with ICD that is comparable with simultaneous-update approaches. METHODS: FreeCT_ICD uses a stored system matrix (SSM), which precalculates the forward projector in the form of a sparse matrix and then reconstructs on a rotating coordinate grid to exploit helical symmetry. In our GPU ICD implementation, we shuffle the sinogram memory ordering such that data accesses in the sinogram coalesce into fewer transactions. We also update NS voxels in the xy-plane simultaneously to improve occupancy. Conventional ICD updates voxels sequentially (NS = 1). Using NS > 1 eliminates existing convergence guarantees. Convergence behavior in a clinical dataset was therefore studied empirically. RESULTS: On a pediatric dataset with a sinogram size of 736 × 16 × 13860 reconstructed to a matrix size of 512 × 512 × 128, our code requires about 20 s per iteration on a single GPU, compared to 2300 s per iteration for a 6-core CPU using FreeCT_ICD. After 400 iterations, the proposed and reference codes converge within 2 HU RMS difference (RMSD). Using a wFBP initialization, convergence within 10 HU RMSD is achieved within 4 min. Convergence is similar with NS values between 1 and 256, and NS = 16 was sufficient to achieve maximum performance. Divergence was not observed until NS > 1024. CONCLUSIONS: With appropriate modifications, ICD may be able to achieve computational performance competitive with the simultaneous-update algorithms currently used for MBIR.
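For readers unfamiliar with ICD, the sketch below shows the core sequential voxel update for a weighted least-squares cost with a sparse system matrix and an up-to-date residual. The regularizer, rotating slice grid, stored-system-matrix layout, and GPU batching described in the paper are intentionally omitted, and all problem sizes and weights are toy assumptions.

```python
import numpy as np
from scipy.sparse import csc_matrix

def icd_sweeps(A, w, y, n_sweeps=10):
    """Plain iterative coordinate descent for the weighted least-squares cost
    0.5 * sum_i w_i (y_i - [A x]_i)^2 : update one voxel at a time while
    keeping the residual current."""
    A = csc_matrix(A)                        # column access is what ICD needs
    x = np.zeros(A.shape[1])
    r = y.astype(float).copy()               # residual r = y - A x (x starts at 0)
    for _ in range(n_sweeps):
        for j in range(A.shape[1]):
            start, end = A.indptr[j], A.indptr[j + 1]
            rows, vals = A.indices[start:end], A.data[start:end]   # column j of A
            if len(rows) == 0:
                continue
            num = np.sum(w[rows] * vals * r[rows])   # a_j' W r
            den = np.sum(w[rows] * vals * vals)      # a_j' W a_j
            step = max(num / den, -x[j])             # non-negativity constraint
            x[j] += step
            r[rows] -= step * vals                   # keep residual up to date
    return x

# Tiny toy problem (dimensions and weights are illustrative)
rng = np.random.default_rng(4)
A = rng.random((200, 50)) * (rng.random((200, 50)) < 0.1)   # sparse-ish system matrix
x_true = rng.random(50)
y = A @ x_true
x_hat = icd_sweeps(A, w=np.ones(200), y=y, n_sweeps=50)
```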


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Child; Computers; Databases, Factual; Humans; Time Factors; Tomography, X-Ray Computed
20.
Med Phys ; 46(10): 4563-4574, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31396974

ABSTRACT

PURPOSE: An important challenge for deep learning models is generalizing to new datasets that may be acquired with acquisition protocols different from those of the training set. It is not always feasible to expand the training data to the range encountered in clinical practice. We introduce a new technique, physics-based data augmentation (PBDA), that can emulate new computed tomography (CT) data acquisition protocols. We demonstrate two forms of PBDA, emulating increases in slice thickness and reductions of dose, on the specific problem of false-positive reduction in the automatic detection of lung nodules. METHODS: We worked with CT images from the Lung Image Database Consortium (LIDC) collection. We employed a hybrid ensemble convolutional neural network (CNN), which consists of multiple CNN modules (VGG, DenseNet, ResNet), for the classification task of determining whether an image patch was a suspicious nodule or a false positive. To emulate a reduction in tube current, we injected noise by simulating forward projection, noise addition, and backprojection corresponding to 1.5 mAs (a "chest x-ray" dose). To simulate thick-slice CT scans from thin-slice CT scans, we grouped and averaged spatially contiguous slices within the thin-slice data. The neural network was trained with 10% of the LIDC dataset, selected to have either the highest tube current or the thinnest slices. The network was tested on the remaining data. We compared PBDA to a baseline with standard geometric augmentations (such as shifts and rotations) and Gaussian noise addition. RESULTS: PBDA improved the performance of the networks when generalizing to the test dataset in a limited number of cases. We found that the best performance was obtained by applying augmentation at very low doses (1.5 mAs), about an order of magnitude less than most screening protocols. In the baseline augmentation, a comparable level of Gaussian noise was injected. For dose-reduction PBDA, the average sensitivity of 0.931 for the hybrid ensemble network was not statistically different from the average sensitivity of 0.935 without PBDA. Similarly, for slice-thickness PBDA, the average sensitivity of 0.900 when augmenting with doubled simulated slice thicknesses was not statistically different from the average sensitivity of 0.895 without PBDA. While there were cases detailed in this paper in which we observed improvements, the overall picture suggests that PBDA may not be an effective data enrichment tool. CONCLUSIONS: PBDA is a newly proposed strategy for mitigating the performance loss of neural networks related to variation of the acquisition protocol between the training dataset and the data encountered in deployment or testing. We found that PBDA does not provide robust improvements with the four neural networks (three modules and the ensemble) tested, for the specific task of false-positive reduction in nodule detection.
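The two PBDA ingredients lend themselves to compact sketches: thick-slice emulation by averaging groups of contiguous thin slices, and low-dose emulation by Poisson resampling of transmitted counts in the projection domain. The photon budget and data below are assumptions, and the dose sketch simplifies the forward-project/add-noise/backproject pipeline described in the paper.

```python
import numpy as np

def emulate_thick_slices(volume, group=2):
    """Average groups of contiguous thin slices (axis 0) to emulate a scan
    reconstructed at `group`-times the original slice thickness."""
    n = (volume.shape[0] // group) * group
    return volume[:n].reshape(-1, group, *volume.shape[1:]).mean(axis=1)

def emulate_low_dose(line_integrals, photons_per_ray=2000, seed=0):
    """Emulate a lower tube-current acquisition by Poisson-resampling the
    transmitted counts for each ray and relogging. `photons_per_ray` is an
    assumed value, not taken from the study."""
    rng = np.random.default_rng(seed)
    expected = photons_per_ray * np.exp(-line_integrals)
    counts = np.maximum(rng.poisson(expected), 1)    # avoid log(0)
    return -np.log(counts / photons_per_ray)

thin_volume = np.random.default_rng(5).random((64, 128, 128))   # toy thin-slice stack
thick_volume = emulate_thick_slices(thin_volume, group=2)        # 32 thicker slices
```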


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Tomography, X-Ray Computed; False Positive Reactions; Humans; Normal Distribution; Radiation Dosage; Sensitivity and Specificity