Results 1 - 20 of 23
1.
Med Phys ; 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38353632

ABSTRACT

BACKGROUND: Digital subtraction angiography (DSA) is a fluoroscopy method primarily used for the diagnosis of cardiovascular diseases (CVDs). Deep learning-based DSA (DDSA) has been developed to extract DSA-like images directly from fluoroscopic images, which saves dose while improving image quality. It can also be applied where C-arm or patient motion is present and conventional DSA cannot be applied. However, due to the lack of clinical training data and unavoidable artifacts in DSA targets, current DDSA models still cannot satisfactorily display specific structures, nor can they predict noise-free images. PURPOSE: In this study, we propose a strategy for producing abundant synthetic DSA image pairs in which the synthetic DSA targets are free of the typical artifacts and noise found in conventional DSA targets, for DDSA model training. METHODS: More than 7,000 forward-projected computed tomography (CT) images and more than 25,000 synthetic vascular projection images were employed to create contrast-enhanced fluoroscopic images and corresponding DSA images, which were utilized as DSA image pairs for training the DDSA networks. The CT projection images and vascular projection images were generated from eight whole-body CT scans and 1,584 3D vascular skeletons, respectively. All vessel skeletons were generated with stochastic Lindenmayer systems. We trained DDSA models on this synthetic dataset and compared them to models trained on a clinical DSA dataset, which contains nearly 4,000 fluoroscopic x-ray images obtained from different models of C-arms. RESULTS: We evaluated the DDSA models on clinical fluoroscopic data of different anatomies, including the leg, abdomen, and heart. The results on leg data showed that, across different methods, training on synthetic data performed similarly to, and sometimes outperformed, training on clinical data.
The results on abdomen and cardiac data demonstrated that models trained on synthetic data were able to extract clearer DSA-like images than conventional DSA and models trained on clinical data. The models trained on synthetic data consistently outperformed their clinical data counterparts, achieving higher scores in the quantitative evaluation of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics for DDSA images, as well as accuracy, precision, and Dice scores for segmentation of the DDSA images. CONCLUSIONS: We proposed an approach to train DDSA networks with synthetic DSA image pairs and extract DSA-like images from contrast-enhanced x-ray images directly. This is a potential tool to aid in diagnosis.
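The stochastic Lindenmayer systems mentioned above can be sketched minimally as follows. This is a 2D illustration with made-up rewriting rules, probabilities, and branching angle, not the paper's 3D vessel generator:

```python
import math
import random

# Each "F" rewrites into a branching pattern with equal probability.
RULES = {"F": [("F[+F]F", 0.5), ("F[-F]F", 0.5)]}

def expand(axiom, rules, iterations, rng):
    s = axiom
    for _ in range(iterations):
        out = []
        for ch in s:
            if ch in rules:
                options, weights = zip(*rules[ch])
                out.append(rng.choices(options, weights=weights)[0])
            else:
                out.append(ch)
        s = "".join(out)
    return s

def to_polyline(s, step=1.0, angle=0.4):
    # Turtle interpretation: F = grow a segment, +/- = turn, [ ] = branch.
    x = y = heading = 0.0
    stack, points = [], [(0.0, 0.0)]
    for ch in s:
        if ch == "F":
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            points.append((x, y))
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]":
            x, y, heading = stack.pop()
    return points

skeleton = expand("F", RULES, 4, random.Random(0))
pts = to_polyline(skeleton)
```

Repeating the expansion with different random seeds yields a family of distinct tree-like skeletons, which is how a large synthetic vessel population can be produced.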

2.
Med Phys ; 50(9): 5312-5330, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37458680

ABSTRACT

BACKGROUND: Vascular diseases are often treated minimally invasively. The interventional material (stents, guidewires, etc.) used during such percutaneous interventions is visualized by some form of image guidance. Today, this image guidance is usually provided by 2D X-ray fluoroscopy, that is, a live 2D image. 3D X-ray fluoroscopy, that is, a live 3D image, could accelerate existing interventions and enable new ones. However, existing algorithms for the 3D reconstruction of interventional material either require too many X-ray projections, and therefore dose, or are only capable of reconstructing single, curvilinear structures. PURPOSE: Using only two new X-ray projections per 3D reconstruction, we aim to reconstruct more complex arrangements of interventional material than was previously possible. METHODS: This is achieved by improving a previously presented deep learning-based reconstruction pipeline, which assumes that the X-ray images are acquired by a continuously rotating biplane system, in two ways: (a) separation of the reconstruction of different object types, and (b) motion compensation using spatial transformer networks. RESULTS: Our pipeline achieves submillimeter accuracy on measured data of a stent and two guidewires inside an anthropomorphic phantom with respiratory motion. In an ablation study, we find that the aforementioned algorithmic changes improve our two figures of merit by 75% (1.76 mm → 0.44 mm) and 59% (1.15 mm → 0.47 mm), respectively. A comparison of our measured dose area product (DAP) rate to DAP rates of 2D fluoroscopy indicates a roughly similar dose burden. CONCLUSIONS: This dose efficiency, combined with the ability to reconstruct complex arrangements of interventional material, makes the presented algorithm a promising candidate to enable 3D fluoroscopy.
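The grid-sampling core of a spatial-transformer-style motion compensation can be sketched as follows. In the pipeline above the transform parameters are regressed by a network; here they are fixed constants, and the bilinear sampler is a generic implementation rather than the authors' code:

```python
import numpy as np

def sample_bilinear(img, tx, ty):
    # Resample img on a grid shifted by (tx, ty), with zero padding
    # outside the image: the differentiable sampling step of a
    # spatial transformer, specialized to a pure translation.
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h, dtype=float),
                         np.arange(w, dtype=float), indexing="ij")
    xs, ys = xs + tx, ys + ty                    # shifted sampling grid
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = xs - x0, ys - y0

    def at(yy, xx):                              # safe lookup, zero outside
        valid = (xx >= 0) & (xx < w) & (yy >= 0) & (yy < h)
        out = np.zeros_like(xs)
        out[valid] = img[yy[valid], xx[valid]]
        return out

    return ((1 - wx) * (1 - wy) * at(y0, x0) + wx * (1 - wy) * at(y0, x1)
            + (1 - wx) * wy * at(y1, x0) + wx * wy * at(y1, x1))

img = np.zeros((8, 8)); img[4, 4] = 1.0
comp = sample_bilinear(img, 1.0, 0.0)            # undo a 1-pixel motion
```

In a real network the shift (or a richer deformation field) would be a learned output, and the sampler's differentiability is what allows the motion estimate to be trained end to end.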


Subjects
Imaging, Three-Dimensional; Stents; Imaging, Three-Dimensional/methods; X-Rays; Fluoroscopy/methods; Phantoms, Imaging; Algorithms
3.
Med Phys ; 49(7): 4391-4403, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35421263

ABSTRACT

PURPOSE: Modern CT scanners use automatic exposure control (AEC) techniques, such as tube current modulation (TCM), to reduce the dose delivered to patients while maintaining image quality. In contrast to conventional approaches that minimize the tube current time product of the CT scan, referred to as mAsTCM in the following, we herein propose a new method, referred to as riskTCM, which aims at reducing the radiation risk to the patient by taking into account the specific radiation risk of every dose-sensitive organ. METHODS: Current mAsTCM implementations use the mAs product as a surrogate for patient dose and thus do not account for the varying dose sensitivity of different organs. Our riskTCM framework assumes that a coarse CT reconstruction, an organ segmentation, and an estimation of the dose distribution can be provided in real time, for example, by applying machine learning techniques. Using this information, riskTCM determines a tube current curve that minimizes a patient risk measure, for example, the effective dose, while keeping the image quality constant. We retrospectively applied riskTCM to 20 patients covering all relevant anatomical regions and tube voltages from 70 to 150 kV. The potential reduction of effective dose at the same image noise was evaluated as a figure of merit and compared to mAsTCM and to a situation with a constant tube current, referred to as noTCM. RESULTS: Anatomical regions like the neck, thorax, abdomen, and pelvis benefit from the proposed riskTCM. On average, compared to today's state-of-the-art mAsTCM, the effective dose was reduced by about 23% for the thorax, 31% for the abdomen, 24% for the pelvis, and 27% for the neck. For the head, the resulting reduction of effective dose is lower, about 13% on average compared to mAsTCM. CONCLUSIONS: With a risk-minimizing TCM, a significantly higher reduction of effective dose is possible compared to mAs-minimizing TCM.
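The idea of a risk-weighted tube current curve can be illustrated with a toy model. The noise model here (image variance proportional to the sum of inverse tube currents) and the resulting closed-form solution are simplifying assumptions for illustration, not the paper's full optimization:

```python
import numpy as np

def risk_tcm(risk_per_view, noise_budget):
    # Minimize sum_k r_k * I_k subject to sum_k 1/I_k = noise_budget.
    # Lagrange multipliers give I_k proportional to 1/sqrt(r_k);
    # the scale factor then enforces the noise budget exactly.
    r = np.asarray(risk_per_view, dtype=float)
    current = 1.0 / np.sqrt(r)                  # shape of the mA curve
    scale = np.sum(1.0 / current) / noise_budget
    return current * scale

# Hypothetical organ-weighted risk per projection angle: views 2 and 4
# pass through a dose-sensitive organ.
r = np.array([1.0, 4.0, 1.0, 4.0])
mA = risk_tcm(r, noise_budget=1.0)
```

Even in this toy setting, the risk-weighted curve beats a constant tube current at the same noise budget: the risk-weighted dose drops from 40 (constant current) to 36, because current is shifted away from the risk-sensitive views.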


Subjects
Tomography, X-Ray Computed; Humans; Phantoms, Imaging; Radiation Dosage; Retrospective Studies; Tomography Scanners, X-Ray Computed; Tomography, X-Ray Computed/adverse effects; Tomography, X-Ray Computed/methods
4.
Med Phys ; 48(10): 5837-5850, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34387362

ABSTRACT

PURPOSE: Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images using a monoplanar or a biplanar C-arm system. However, the projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires, or coils. In this work, we propose a deep learning-based pipeline for real-time tomographic (four-dimensional [4D]) interventional guidance at conventional dose levels. METHODS: Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained using simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils, and guide wires. RESULTS: The pipeline is capable of reconstructing interventional tools from only four X-ray projections without the need for a patient prior. At an isotropic voxel size of 100 µm, our method achieves a precision/recall within a 100 µm environment of the ground truth of 93%/98%, 90%/71%, and 93%/76% for guide wires, stents, and coils, respectively. CONCLUSIONS: A deep learning-based approach for 4D interventional guidance is able to overcome the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
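The two-step structure (extract the tool in projection space, then segment it in a sparse reconstruction) can be caricatured in 2D under strong simplifying assumptions: two orthogonal parallel views stand in for the four cone-beam projections, and simple thresholding stands in for both CNNs:

```python
import numpy as np

def backproject(p_rows, p_cols):
    # Unfiltered backprojection of two orthogonal 1D projections:
    # smear each profile back across the image and sum.
    return p_rows[:, None] + p_cols[None, :]

vol = np.zeros((16, 16)); vol[5, 9] = 1.0            # a "guide wire tip"
p_rows, p_cols = vol.sum(axis=1), vol.sum(axis=0)    # acquire projections

# Step 1: "network" extracts the tool signal from each projection.
tool_rows = np.where(p_rows > 0.5, p_rows, 0.0)
tool_cols = np.where(p_cols > 0.5, p_cols, 0.0)

# Step 2: segment the tool in the sparse, streaky backprojection.
recon = backproject(tool_rows, tool_cols)
seg = recon >= recon.max()
```

The point of the toy is the division of labor: because step 1 removes everything except the sparse tool before backprojection, the second segmentation only has to disentangle the few-view streaks, which is what makes four projections viable in the real pipeline.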


Subjects
Deep Learning; Cone-Beam Computed Tomography; Fluoroscopy; Humans; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography, X-Ray Computed; X-Rays
5.
Med Phys ; 46(1): 238-249, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30390295

ABSTRACT

PURPOSE: X-ray scattering leads to CT images with reduced contrast, inaccurate CT values, and streak and cupping artifacts. Therefore, scatter correction is crucial to maintain the diagnostic value of CT and CBCT examinations. However, existing approaches are not able to combine high accuracy with high computational performance. Therefore, we propose the deep scatter estimation (DSE): a deep convolutional neural network to derive highly accurate scatter estimates in real time. METHODS: Gold standard scatter estimation approaches rely on dedicated Monte Carlo (MC) photon transport codes. However, being computationally expensive, MC methods cannot be used routinely. To enable real-time scatter correction with similar accuracy, DSE uses a deep convolutional neural network that is trained to predict MC scatter estimates based on the acquired projection data. Here, the potential of DSE is demonstrated using simulations of CBCT head, thorax, and abdomen scans as well as measurements at an experimental table-top CBCT. Two conventional, computationally efficient scatter estimation approaches were implemented as references: a kernel-based scatter estimation (KSE) and the hybrid scatter estimation (HSE). RESULTS: The simulation study demonstrates that DSE generalizes well to varying tube voltages, varying noise levels, and varying anatomical regions as long as they are appropriately represented within the training data. In all cases, the deviation of the scatter estimates from the ground truth MC scatter distribution is less than 1.8%, while it is between 6.2% and 293.3% for HSE and between 11.2% and 20.5% for KSE. To evaluate the performance on real data, measurements of an anthropomorphic head phantom were performed. Errors were quantified by comparison to a slit scan reconstruction. Here, the deviation is 278 HU (no correction), 123 HU (KSE), 65 HU (HSE), and 6 HU (DSE).
CONCLUSIONS: DSE clearly outperforms conventional scatter estimation approaches in terms of accuracy. DSE is nearly as accurate as Monte Carlo simulation but faster by orders of magnitude (≈10 ms per projection).
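As a concrete point of reference, the kernel-based baseline (KSE) amounts to convolving the projection with a broad kernel scaled by a scatter fraction. The Gaussian kernel width and the scatter fraction below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def kernel_scatter_estimate(projection, sigma=8.0, scatter_fraction=0.2):
    # Approximate scatter as a heavily blurred, scaled copy of the
    # projection, using an FFT Gaussian low-pass with unit DC gain.
    h, w = projection.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    g = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))
    smooth = np.real(np.fft.ifft2(np.fft.fft2(projection) * g))
    return scatter_fraction * smooth

projection = np.ones((32, 32))                    # flat toy projection
scatter = kernel_scatter_estimate(projection)
```

A network like DSE replaces this fixed, spatially invariant kernel with a learned, object-dependent mapping from projection to scatter, which is where the accuracy gap in the abstract comes from.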


Subjects
Anatomy; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted/methods; Radiation Dosage; Scattering, Radiation; Artifacts; Humans; Monte Carlo Method; Phantoms, Imaging; Signal-To-Noise Ratio
6.
Med Phys ; 45(10): 4541-4557, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30098038

ABSTRACT

PURPOSE: The purpose of this study was to establish a novel paradigm to facilitate radiologists' workflow by combining mutually exclusive CT image properties that emerge from different reconstructions, display settings, and organ-dependent spectral evaluation methods into a single context-sensitive image, exploiting prior anatomical information. METHODS: The CT dataset is segmented and classified into different organs, for example, the liver, left and right kidney, spleen, aorta, and left and right lung, as well as into the tissue types bone, fat, soft tissue, and vessels, using a cascaded three-dimensional fully convolutional neural network (CNN) consisting of two successive 3D U-nets. The binary organ and tissue masks are transformed to tissue-related weighting coefficients that allow individual organ-specific parameter settings in each anatomical region. Exploiting this prior knowledge, we develop a novel paradigm of context-sensitive (CS) CT imaging consisting of a prior-based spatial resolution (CSR), display (CSD), and dual energy evaluation (CSDE). The CSR locally emphasizes desired image properties. On a per-voxel basis, the reconstruction most suitable for the organ, tissue type, and clinical indication is chosen automatically. Furthermore, an organ-specific windowing and display method is introduced that aims at providing superior image visualization. The CSDE analysis allows multiple organs to be evaluated simultaneously and shows organ-specific DE overlays wherever appropriate. The ROIs required for a patient-specific calibration of the algorithms are automatically placed in the corresponding anatomical structures. The DE applications are selected and applied only to the specific organs based on the prior knowledge. The approach is evaluated using patient data acquired with a dual source CT system.
The final CS images combine the indication-specific advantages of different parameter settings into a single display with the desired tissue-related image properties. RESULTS: A comparison with conventionally reconstructed images reveals an improved spatial resolution in highly attenuating objects and in air, while the compound image maintains a low noise level in soft tissue. Furthermore, the tissue-related weighting coefficients allow for the combination of varying settings into one novel image display. We are, in principle, able to automate and standardize the spectral analysis of the DE data using prior anatomical information. Each tissue type is evaluated with its corresponding DE application simultaneously. CONCLUSION: This work provides a proof of concept of CS imaging. Since the method is not yet implemented in everyday clinical practice, a comprehensive clinical evaluation in a large cohort remains a topic for future research. Nonetheless, the presented method has the potential to facilitate workflow in clinical routine and could improve diagnostic accuracy by improving sensitivity for incidental findings. It is a potential step toward presenting increasingly complex CT information and toward significantly improving the radiologist's workflow, since dealing with multiple CT reconstructions may no longer be necessary. The method can be readily generalized to multienergy data and also to other modalities.
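The per-voxel blending driven by softened tissue masks can be sketched as follows. The box smoothing of the mask and the two constant basis images are illustrative stand-ins for the paper's weighting-coefficient computation:

```python
import numpy as np

def blend(sharp, denoised, tissue_mask, softness=1):
    # Turn a binary tissue mask into smooth weighting coefficients
    # (crude neighbour averaging), then mix a sharp and a low-noise
    # reconstruction voxel by voxel.
    w = tissue_mask.astype(float)
    for _ in range(softness):
        w = (w + np.roll(w, 1, 0) + np.roll(w, -1, 0)
               + np.roll(w, 1, 1) + np.roll(w, -1, 1)) / 5.0
    return w * sharp + (1.0 - w) * denoised

sharp = np.full((8, 8), 2.0)        # high-resolution, noisy basis image
denoised = np.full((8, 8), 1.0)     # low-noise basis image
mask = np.zeros((8, 8), bool); mask[2:6, 2:6] = True  # e.g. "bone" voxels
cs = blend(sharp, denoised, mask)
```

Inside the mask the sharp reconstruction dominates, outside it the low-noise one; the softened transition avoids visible seams between regions, which is the practical reason the binary masks are converted to continuous weights.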


Subjects
Tomography, X-Ray Computed/methods; Algorithms; Calibration; Humans; Image Processing, Computer-Assisted; Organ Specificity; Phantoms, Imaging
7.
Med Phys ; 2018 Jun 25.
Article in English | MEDLINE | ID: mdl-29938797

ABSTRACT

PURPOSE: In image-guided radiation therapy, fiducial markers or clips are often used to determine the position of the tumor. These markers lead to streak artifacts in cone-beam CT (CBCT) scans. Standard inpainting-based metal artifact reduction (MAR) methods fail to remove these artifacts in cases of large motion. We propose two methods to effectively reduce artifacts caused by moving metal inserts. METHODS: The first method (MMAR) utilizes a coarse metal segmentation in the image domain and a refined segmentation in the rawdata domain. After an initial reconstruction, metal is segmented and forward projected, giving a coarse metal mask in the rawdata domain. Inside the coarse mask, metal is segmented using a 2D Sobel filter. Metal is then removed by linear interpolation within the refined metal mask. The second method (MoCoMAR) utilizes a motion compensation (MoCo) algorithm [Med Phys. 2013;40:101913] that provides a motion-free volume (3D) or a time series of motion-free volumes (4D). We then apply the normalized metal artifact reduction (NMAR) [Med Phys. 2010;37:5482-5493] to these MoCo volumes. Both methods were applied to three CBCT data sets of patients with metal inserts in the thorax or abdomen region and to a 4D thorax simulation. The results were compared to volumes corrected by a standard MAR [Radiology. 1987;164:576-577]. RESULTS: MMAR and MoCoMAR were able to remove all artifacts caused by moving metal inserts for the patients and the simulation. Both new methods outperformed the standard MAR, which was only able to remove artifacts caused by metal inserts with little or no motion. CONCLUSIONS: In this work, two new methods to remove artifacts caused by moving metal inserts are introduced. Both methods showed good results for a simulation and three patients.
While the first method (MMAR) works without any prior knowledge, the second method (MoCoMAR) requires a respiratory signal for the MoCo step, is computationally more demanding, and offers no benefit over MMAR unless MoCo images are desired.
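The linear-interpolation inpainting step shared by these MAR variants can be sketched per sinogram row: detector samples flagged as metal are replaced by values interpolated from the nearest unflagged neighbours. The numbers below are a toy row, not real rawdata:

```python
import numpy as np

def inpaint_row(row, metal):
    # Replace flagged samples by linear interpolation across the gap,
    # using the surrounding unflagged samples as anchor points.
    row = row.astype(float).copy()
    idx = np.arange(row.size)
    row[metal] = np.interp(idx[metal], idx[~metal], row[~metal])
    return row

row = np.array([1.0, 2.0, 9.0, 9.0, 5.0])            # 9s: metal trace
metal = np.array([False, False, True, True, False])
fixed = inpaint_row(row, metal)
```

Applying this row by row over the refined metal mask yields the metal-free sinogram that is then reconstructed; the quality of the mask, not the interpolation itself, is what the two proposed methods improve.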

8.
Z Med Phys ; 27(3): 180-192, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28522170

ABSTRACT

PURPOSE: Optimization of the AIR algorithm for improved convergence and performance. METHODS: The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g., a point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g., high spatial resolution or low noise. The optimized algorithm alternates between the optimization of rawdata fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. The regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function of the AIR image. RESULTS: The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. CONCLUSIONS: The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution of AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters that provides these results is discussed.
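The alternation between a rawdata-fidelity update and a gradient-descent regularization step can be sketched on a toy least-squares problem. The SART-style normalization and the quadratic smoothness penalty below are generic stand-ins for the paper's OSSART update and its regularizer; matrix sizes and the step size are arbitrary:

```python
import numpy as np

def sart_step(A, x, b):
    # SART-like fidelity update: backproject the row-normalized
    # residual and normalize per column.
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    return x + (A.T @ ((b - A @ x) / row_sums)) / col_sums

def smooth_step(x, beta=0.1):
    # One gradient-descent step on 0.5 * sum_i (x_i - x_{i+1})^2.
    g = np.zeros_like(x)
    g[:-1] += x[:-1] - x[1:]
    g[1:] += x[1:] - x[:-1]
    return x - beta * g

rng = np.random.default_rng(0)
A = rng.random((12, 6)) + 0.1       # toy system matrix
x_true = np.ones(6)
b = A @ x_true                      # consistent "rawdata"

x = np.zeros(6)
for _ in range(50):                 # alternate fidelity and smoothing
    x = smooth_step(sart_step(A, x, b))
```

For this consistent toy system the alternation settles on the smooth exact solution; in AIR the same pattern is applied to the weighting images, which is what makes the alternating scheme faster than plain gradient descent on the combined cost.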


Subjects
Algorithms; Tomography, X-Ray Computed/standards; Abdomen/diagnostic imaging; Head/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Phantoms, Imaging; Radionuclide Imaging; Thorax/diagnostic imaging; Tomography, X-Ray Computed/methods
9.
Med Phys ; 43(5): 2303, 2016 May.
Article in English | MEDLINE | ID: mdl-27147342

ABSTRACT

PURPOSE: CT reconstruction requires an angular coverage of 180° or more for each point within the field of measurement. Thus, common trajectories use a 180° plus fan angle rotation. This is sometimes combined with a translation of the rotational isocenter in order to achieve circular trajectories with an isocenter different from the mechanical rotation center or elliptical trajectories. Rays measured redundantly are appropriately weighted. In case of an angular coverage smaller than 180°, the reconstructed images suffer from limited angle artifacts. In mechanical constructions with a rotation range limited to less than 180° plus fan angle, the angular coverage can be extended by adding one or two shifts to the rotational motion. If the missing angle is less than the fan angle, the shifts can completely compensate for the limited rotational capabilities. METHODS: The authors give weight functions that can be viewed as generalized Parker weights, which can be applied to the raw data before image reconstruction. Raw data of Forbild phantoms using the rotate-plus-shift trajectory are simulated with the geometry of a typical mobile flat detector-based C-arm system. Filtered backprojection (FBP) reconstructions using the new redundancy weight are performed and compared to FBP reconstructions of limited angle scans as well as short-scan reference trajectories using Parker weight. RESULTS: The new weighting method is exact in 2D, and for 3D Feldkamp-type reconstructions, it is exact in the mid-plane. The proposed weight shows a mathematically exact match with Parker weight for conventional short-scan trajectories. Reconstructions of rotate-plus-shift trajectories using the new weight do not suffer from limited angle artifacts, whereas scans limited to less than 180° without shift show prominent artifacts. Image noise in rotate-plus-shift scans is comparable to that of corresponding short scans. 
CONCLUSIONS: The new weight function enables straightforward filtered backprojection reconstruction of data acquired with the rotate-plus-shift C-arm trajectory and a large variety of other advanced trajectories.
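For reference, the conventional Parker weight that the generalized weights match for a short scan can be written down directly. This is the textbook formula for a fan-beam short scan, not the paper's generalized weight:

```python
import math

def parker_weight(beta, gamma, delta):
    """Parker redundancy weight for a fan-beam short scan.

    beta: projection angle in [0, pi + 2*delta]
    gamma: fan angle in [-delta, delta]
    delta: half fan angle
    """
    if beta < 0.0 or beta > math.pi + 2.0 * delta:
        return 0.0
    if beta <= 2.0 * (delta - gamma):            # start of scan: ramp up
        return math.sin(math.pi / 4.0 * beta / (delta - gamma)) ** 2
    if beta <= math.pi - 2.0 * gamma:            # uniquely measured rays
        return 1.0
    # end of scan: ramp down, complementary to the start ramp
    return math.sin(math.pi / 4.0
                    * (math.pi + 2.0 * delta - beta) / (delta + gamma)) ** 2

# Conjugate rays (beta, gamma) and (beta + pi + 2*gamma, -gamma)
# measure the same line; their weights must sum to one.
w_start = parker_weight(0.1, 0.05, 0.2)
w_end = parker_weight(0.1 + math.pi + 2 * 0.05, -0.05, 0.2)
```

The smooth sin² ramps avoid the streaks that a hard 0/1 redundancy weight would create; the generalized weights in the paper extend exactly this complementarity condition to the rotate-plus-shift trajectory.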


Subjects
Cone-Beam Computed Tomography/instrumentation; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Artifacts; Computer Simulation; Equipment Design; Head/diagnostic imaging; Humans; Models, Anatomic; Phantoms, Imaging; Rotation
10.
Med Phys ; 42(1): 469-78, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25563286

ABSTRACT

PURPOSE: Scattered radiation is one of the major problems facing image quality in flat detector cone-beam computed tomography (CBCT). Previously, a scatter estimation and correction method using primary beam modulation was proposed. The original image processing technique used a frequency-domain-based analysis, which proved to be sensitive to the accuracy of the modulator pattern, both spatially and in amplitude, as well as to the frequency of the modulation pattern. In addition, it cannot account for penumbra effects that occur, for example, due to the finite focal spot size, and the scatter estimate can be degraded by high-frequency components of the primary image. METHODS: In this paper, the authors present a new way to estimate scatter using primary modulation. It is less sensitive to modulator nonidealities and, most importantly, can handle arbitrary modulator shapes and changes in modulator attenuation. The main idea is that scatter estimation can be expressed as an optimization problem, which yields a separation of the scatter and the primary image. The method is evaluated using simulated and experimental CBCT data. The scattering properties of the modulator itself are analyzed using a Monte Carlo simulation. RESULTS: All reconstructions show strong improvements in image quality. To quantify the results, all images are compared to reference images (ideal simulations and collimated scans). CONCLUSIONS: The proposed modulator-based scatter reduction algorithm may open the field of flat detector-based imaging to become a quantitative modality. This may have significant impact on C-arm imaging and on image-guided radiation therapy.
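The separation idea can be illustrated in a toy 1D setting: neighbouring detector pixels see the same locally smooth primary and scatter through different known modulator factors, so each pixel pair yields a solvable 2x2 system. The real method solves a regularized optimization rather than this pairwise closed form, and the pattern and values below are illustrative:

```python
import numpy as np

def separate(d, m):
    # Pairwise model: m1*p + s = d1 and m2*p + s = d2, with p and s
    # assumed constant across each pixel pair (locally smooth).
    d1, d2 = d[0::2], d[1::2]
    m1, m2 = m[0::2], m[1::2]
    p = (d1 - d2) / (m1 - m2)       # primary from the modulation contrast
    s = d1 - m1 * p                 # scatter as the unmodulated remainder
    return p, s

m = np.tile([1.0, 0.7], 8)          # known alternating modulation pattern
p_true = np.full(16, 5.0)           # smooth primary signal
s_true = np.full(16, 2.0)           # smooth scatter signal
d = m * p_true + s_true             # measured, modulated projection
p_est, s_est = separate(d, m)
```

The key property exploited is that only the primary is imprinted with the modulation, while scatter is not; any contrast at the modulator frequency therefore betrays the primary, regardless of the modulator's exact shape.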


Subjects
Algorithms; Cone-Beam Computed Tomography/methods; Scattering, Radiation; Computer Simulation; Cone-Beam Computed Tomography/instrumentation; Head/diagnostic imaging; Humans; Lung/diagnostic imaging; Models, Theoretical; Monte Carlo Method; Phantoms, Imaging
11.
Med Phys ; 41(6): 061914, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24877825

ABSTRACT

PURPOSE: Iterative image reconstruction is gaining increasing interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how to best achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off: the stronger the regularization, and thus the noise reduction, the greater the loss of spatial resolution and thus of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. METHODS: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner.
To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast factor for contrast-resolution plots. Furthermore, the authors calculate the contrast-to-noise ratio using the low contrast disks and compare the agreement of the reconstructions with the ground truth by calculating the normalized cross-correlation and the root-mean-square deviation. To evaluate the clinical performance of the proposed method, the authors reconstruct patient data acquired with a Somatom Definition Flash dual source CT scanner (Siemens Healthcare, Forchheim, Germany). RESULTS: The results of the simulation study show that among the compared algorithms AIR achieves the highest resolution and the highest agreement with the ground truth. Compared to the reference FBP reconstruction, AIR is able to reduce the relative pixel noise by up to 50% and at the same time achieve a higher resolution by maintaining the edge information from the basis images. These results were confirmed with the patient data. CONCLUSIONS: To evaluate the AIR algorithm, simulated and measured patient data of a state-of-the-art clinical CT system were processed. It is shown that generating CT images through the reconstruction of weighting coefficients has the potential to improve the resolution-noise trade-off and thus to improve the dose usage in clinical CT.
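The core of AIR, a voxel-wise convex combination of basis images, can be sketched as follows. In AIR the weighting image alpha is itself reconstructed from rawdata; here a simple gradient-based edge indicator stands in for it, purely for illustration:

```python
import numpy as np

def combine(sharp, smooth):
    # Edge indicator from the sharp basis image's local gradients,
    # mapped to [0, 1): favour the sharp image at edges, the smooth
    # (low-noise) image in flat regions.
    gx = np.abs(np.diff(sharp, axis=1, prepend=sharp[:, :1]))
    gy = np.abs(np.diff(sharp, axis=0, prepend=sharp[:1, :]))
    edge = gx + gy
    alpha = edge / (edge + 1.0)
    return alpha * sharp + (1.0 - alpha) * smooth, alpha

sharp = np.zeros((6, 6)); sharp[:, 3:] = 10.0   # sharp basis with an edge
smooth = np.full((6, 6), 4.0)                   # low-noise basis image
air, alpha = combine(sharp, smooth)
```

Because the final image is linear in the basis images for a fixed alpha, metrics like the MTF remain well-defined wherever alpha varies smoothly, which is the property the abstract highlights.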


Subjects
Algorithms; Tomography, X-Ray Computed/methods; Artifacts; Computer Simulation; Humans; Least-Squares Analysis; Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation; Water
12.
Med Phys ; 41(2): 021907, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24506628

ABSTRACT

PURPOSE: Iterative image reconstruction is gaining increasing interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. However, among vendors and researchers, there is no consensus on how to best achieve these goals. The authors focus on the aspect of geometric ray profile modeling, which is realized by some algorithms, while others model the ray as a straight line. The authors incorporate ray modeling (RM) in nonregularized iterative reconstruction. That is, instead of using one single needle beam to represent the x-ray, the authors evaluate the double integral of attenuation path length over the finite source distribution and the finite detector element size in the numerical forward projection. Their investigations aim at analyzing the resolution recovery (RR) effects of RM. Resolution recovery means that frequencies can be recovered beyond the resolution limit of the imaging system. In order to evaluate whether clinical CT images can benefit from modeling the geometrical properties of each x-ray, the authors performed a 2D simulation study of a clinical CT fan-beam geometry that includes the precise modeling of these geometrical properties. METHODS: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and a Forbild thorax phantom with circular resolution patterns representing calcifications in the heart region are simulated. An FBP reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The FBP is compared to iterative reconstruction techniques with and without RM: an ordered subsets convex (OSC) algorithm without any RM (OSC), an OSC where the forward projection models the finite focal spot and detector size (OSC-RM), and an OSC with RM and a matched forward and backprojection pair (OSC-T-RM, T for transpose).
In all cases, noise was matched so that the comparison could focus on spatial resolution. The authors use two different simulation settings, both based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm. Setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and they compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: the statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART), and another statistical iterative reconstruction algorithm, denoted ordered subsets maximum likelihood (OSML). All algorithms were implemented both without RM (denoted as OSC, OSSART, and OSML) and with RM (denoted as OSC-RM, OSSART-RM, and OSML-RM). RESULTS: For the unrealistic case of a 5.0 mm focal spot, the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, which is the first bar pattern that cannot be resolved without RM, can easily be resolved with RM. For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependency on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size, increasing improvements can be observed with RM. CONCLUSIONS: Geometric RM in iterative reconstruction helps improve spatial resolution if the ray cross-section is significantly larger than the ray sampling distance.
In clinical CT, however, the ray is not much thicker than the distance between neighboring ray centers, as the focal spot size is small and detector crosstalk is negligible, due to reflective coatings between detector elements. Therefore,RM appears not to be necessary in clinical CT to achieve resolution recovery.
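The double integral over the finite source distribution and the finite detector element can be illustrated with a small numerical sketch. This is not the authors' implementation: the disk phantom, the geometry, and the 9 x 9 sub-ray quadrature are assumptions made only for this example.

```python
import math

def mu(x, y):
    # Toy object: homogeneous disk of radius 1 (attenuation 1 inside).
    return 1.0 if x * x + y * y <= 1.0 else 0.0

def line_integral(p0, p1, n=2000):
    # Midpoint-rule path integral of mu along the segment p0 -> p1.
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        total += mu(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return total * length / n

def needle_beam(src, det):
    # Conventional model: one needle beam between the source and
    # detector element centers.
    return line_integral(src, det)

def ray_modeled(src, det, src_width, det_width, k=9):
    # RM sketch: approximate the double integral over the finite source
    # distribution and the finite detector element by a k x k grid of
    # sub-rays with lateral offsets on source and detector.
    total = 0.0
    for i in range(k):
        ds = (-0.5 + (i + 0.5) / k) * src_width
        for j in range(k):
            dd = (-0.5 + (j + 0.5) / k) * det_width
            total += line_integral((src[0] + ds, src[1]),
                                   (det[0] + dd, det[1]))
    return total / (k * k)
```

For a ray that just misses the disk, the needle-beam model returns zero while the ray-modeled value is positive: the finite focal spot blurs the object edge into the measurement, which is the effect RM reproduces in the forward projection.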


Subjects
Image Processing, Computer-Assisted/methods; Models, Theoretical; Tomography, X-Ray Computed/methods; Algorithms; Humans; Phantoms, Imaging; Radiography, Thoracic
13.
Med Phys ; 39(12): 7499-506, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231299

ABSTRACT

PURPOSE: Mouse models of cardiac diseases have proven to be a valuable tool in preclinical research. The high cardiac and respiratory rates of free-breathing mice prohibit conventional in vivo cardiac perfusion studies using computed tomography, even if gating methods are applied. This makes sacrificing the animals unavoidable and allows only ex vivo methods to be applied. METHODS: To overcome this issue, the authors propose a low dose scan protocol and an associated reconstruction algorithm that allow for in vivo imaging of cardiac perfusion and associated processes, retrospectively synchronized to the respiratory and cardiac motion of the animal. The scan protocol consists of repetitive injections of contrast media within several consecutive scans, while the ECG, respiratory motion, and timestamp of contrast injection are recorded and synchronized to the acquired projections. The iterative reconstruction algorithm employs a six-dimensional edge-preserving filter to provide low-noise, motion artifact-free images of the animal examined with the authors' low dose scan protocol. RESULTS: The reconstructions obtained show that the complete temporal bolus evolution can be visualized and quantified in any desired combination of cardiac and respiratory phase, including reperfusion phases. The proposed reconstruction method thereby keeps the administered radiation dose at a minimum and thus reduces metabolic interference with the animal, allowing for longitudinal studies. CONCLUSIONS: The authors' low dose scan protocol and phase-correlated dynamic reconstruction algorithm provide an easy and effective way to visualize phase-correlated perfusion processes in routine laboratory studies using free-breathing mice.
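Retrospective synchronization can be pictured as sorting projections into cardiac/respiratory phase bins using the recorded ECG and respiratory signals. A minimal sketch, assuming the R peaks have already been detected and the respiratory amplitude is normalized to [0, 1); the bin counts are arbitrary and not taken from the authors' protocol.

```python
import bisect

def cardiac_phase(t, r_peaks):
    # Relative cardiac phase in [0, 1) of timestamp t, given a sorted
    # list of already-detected ECG R-peak times.
    i = bisect.bisect_right(r_peaks, t) - 1
    if i < 0 or i + 1 >= len(r_peaks):
        return None  # timestamp outside the recorded ECG
    t0, t1 = r_peaks[i], r_peaks[i + 1]
    return (t - t0) / (t1 - t0)

def bin_projections(timestamps, r_peaks, resp_signal, n_card=10, n_resp=4):
    # Sort projection indices into (cardiac, respiratory) phase bins;
    # resp_signal[k] is a normalized respiratory amplitude in [0, 1).
    bins = {}
    for k, t in enumerate(timestamps):
        pc = cardiac_phase(t, r_peaks)
        if pc is None:
            continue  # discard projections without a valid cardiac phase
        c = int(pc * n_card)
        r = min(int(resp_signal[k] * n_resp), n_resp - 1)
        bins.setdefault((c, r), []).append(k)
    return bins
```

Each bin then holds the projection subset that a phase-correlated reconstruction of that (cardiac, respiratory) state would use.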


Subjects
Cardiac-Gated Imaging Techniques/veterinary; Coronary Circulation/physiology; Coronary Vessels/physiopathology; Myocardial Perfusion Imaging/veterinary; Respiratory-Gated Imaging Techniques/veterinary; X-Ray Microtomography/veterinary; Animals; Blood Flow Velocity/physiology; Cardiac-Gated Imaging Techniques/methods; Mice; Myocardial Perfusion Imaging/methods; Reproducibility of Results; Respiratory Mechanics; Respiratory-Gated Imaging Techniques/methods; Sensitivity and Specificity; X-Ray Microtomography/methods
14.
Med Phys ; 39(9): 5384-92, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22957606

ABSTRACT

PURPOSE: The authors propose a novel method for misalignment estimation of micro-CT scanners using an adaptive genetic algorithm. METHODS: The proposed algorithm is able to estimate the rotational geometry, the direction vector of table movement, and the displacement between the imaging threads of a dual source or even multisource scanner. The calibration procedure does not rely on dedicated calibration phantoms; a sequence scan of a single metal bead is sufficient to geometrically calibrate the whole imaging system for spiral, sequential, and circular scan protocols. Dual source spiral and sequential scan protocols in micro-computed tomography result in projection data that, besides the source and detector positions and orientations, also require precise knowledge of the table direction vector to be reconstructed properly. If those geometric parameters are not known accurately, severe artifacts and a loss in spatial resolution appear in the reconstructed images unless a geometry calibration is performed. The table direction vector is further required to ensure that consecutive volumes of a sequence scan can be stitched together, and to allow the reconstruction of spiral data at all. RESULTS: The algorithm's performance is evaluated using simulations of a micro-CT system with known geometry and misalignment. To assess the quality of the algorithm in a real-world scenario, the calibration of a micro-CT scanner is performed and several reconstructions with and without geometry estimation are presented. CONCLUSIONS: The results indicate that the algorithm successfully estimates all geometry parameters: misalignment artifacts in the reconstructed volumes vanish, and the spatial resolution is increased, as shown by the evaluation of modulation transfer function measurements.
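A genetic geometry search can be sketched for a single misalignment parameter. The fan-beam forward model, the detector offset as the only unknown, and the truncation-selection scheme with shrinking mutation width below are simplifying assumptions; the actual method estimates the full rotational geometry and table direction vector.

```python
import math
import random

def project_bead(bead, angle, det_offset):
    # Simplified 2D fan-beam forward model: source-isocenter distance R,
    # isocenter-detector distance D, flat detector; det_offset is the
    # lateral detector misalignment to be estimated.
    R, D = 100.0, 50.0
    c, s = math.cos(angle), math.sin(angle)
    x = c * bead[0] + s * bead[1]   # bead position in the rotating frame
    y = -s * bead[0] + c * bead[1]
    return (R + D) * x / (R - y) + det_offset

def fitness(offset, measurements, angles, bead):
    # Negative sum of squared reprojection errors of a candidate offset.
    return -sum((project_bead(bead, a, offset) - m) ** 2
                for a, m in zip(angles, measurements))

def estimate_offset(measurements, angles, bead, seed=0, gens=60, pop_n=30):
    # Tiny adaptive genetic algorithm: truncation selection plus
    # Gaussian mutation whose width shrinks over the generations.
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_n)]
    sigma = 1.0
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, measurements, angles, bead),
                 reverse=True)
        parents = pop[:pop_n // 3]      # keep the fittest third (elitism)
        pop = parents + [rng.choice(parents) + rng.gauss(0.0, sigma)
                         for _ in range(pop_n - len(parents))]
        sigma *= 0.9                    # adapt the mutation width
    return max(pop, key=lambda p: fitness(p, measurements, angles, bead))
```

Because the offset enters this toy model additively, the fitness is convex in it; the real multi-parameter calibration problem is not, which is what motivates a genetic search in the first place.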


Subjects
Tomography, Spiral Computed/methods; X-Ray Microtomography/methods; Algorithms; Calibration; Image Processing, Computer-Assisted; Phantoms, Imaging
15.
Phys Med Biol ; 57(6): 1517-25, 2012 Mar 21.
Article in English | MEDLINE | ID: mdl-22391045

ABSTRACT

Temporal-correlated image reconstruction, also known as 4D CT image reconstruction, is a major challenge in computed tomography. The reason for incorporating the temporal domain into the reconstruction is motion of the scanned object, which would otherwise lead to motion artifacts. The standard method for 4D CT image reconstruction is to extract single motion phases and reconstruct them separately. These reconstructions can suffer from undersampling artifacts due to the low number of projections used in each phase. Different iterative methods try to incorporate a priori knowledge to compensate for these artifacts, and this paper follows that strategy. The cost function used here is higher dimensional: it accounts for the sparseness of the measured signal in both the spatial and temporal directions, which leads to the definition of a higher dimensional total variation. The method is validated using in vivo cardiac micro-CT mouse data. Additionally, the results are compared to phase-correlated reconstructions using the FDK algorithm and to a total variation constrained reconstruction whose total variation term is defined in the spatial domain only. The reconstructed datasets show strong improvements in terms of artifact reduction and low-contrast resolution compared to the other methods, while the temporal resolution of the reconstructed signal is not affected.
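The higher dimensional total variation can be written as the spatial TV of each time frame plus finite differences along the temporal direction. A minimal anisotropic sketch for a frame sequence f[t][y][x]; the exact discretization and norm used in the paper may differ.

```python
def tv_spatial(f):
    # Anisotropic spatial total variation, summed over all time frames.
    tv = 0.0
    for frame in f:
        ny, nx = len(frame), len(frame[0])
        for y in range(ny):
            for x in range(nx):
                if x + 1 < nx:
                    tv += abs(frame[y][x + 1] - frame[y][x])
                if y + 1 < ny:
                    tv += abs(frame[y + 1][x] - frame[y][x])
    return tv

def tv_spatiotemporal(f):
    # Higher dimensional TV: spatial TV plus finite differences along
    # the temporal direction, penalizing frame-to-frame changes.
    tv = tv_spatial(f)
    for t in range(len(f) - 1):
        for y in range(len(f[0])):
            for x in range(len(f[0][0])):
                tv += abs(f[t + 1][y][x] - f[t][y][x])
    return tv
```

A temporally static sequence pays no extra penalty, while frame-to-frame changes do; this is how the temporal term suppresses undersampling streaks that fluctuate from phase to phase without penalizing consistent anatomy.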


Subjects
Four-Dimensional Computed Tomography/statistics & numerical data; Algorithms; Animals; Image Processing, Computer-Assisted/statistics & numerical data; Mice; Models, Statistical; X-Ray Microtomography/statistics & numerical data
16.
Med Phys ; 38(6): 2868-78, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21815362

ABSTRACT

PURPOSE: In classical x-ray CT, the diameter of the field of measurement (FOM) must not fall below the transversal diameter of the patient or specimen. The ratio of the FOM diameter to the number of transversal detector elements therefore typically defines the spatial resolution. The authors aim at improving the spatial resolution within a region of interest (ROI) by a factor of 10-100 while maintaining artifact-free CT image reconstruction inside and outside the ROI. Two novel methods are proposed for artifact-free reconstruction of the truncated ROI scan (a data weighting method and a data filtering method) and compared with the gold standard for this problem (the data completion method). METHODS: First, an overview scan with low spatial resolution and a large FOM that exceeds the object transversally is performed. Second, a high-resolution scan is performed, where the scanner's magnification is changed such that the FOM matches the ROI at the cost of laterally truncated projection data. The gold standard forward projects the low-resolution reconstruction onto the rays missing in the high-resolution scan. The authors propose the data filtering method, which uses the low-resolution reconstruction and calculates a high-frequency correction term from the high-resolution scan, and the data weighting method, which reconstructs the truncated high-resolution data and calculates a detruncation image from the low-resolution data. RESULTS: The methods are compared using a simulation of the Forbild head phantom and a measurement of a spinal disk implant. The data weighting method and the data completion method show the same image quality. The data filtering method yields slightly inferior image quality that may still be sufficient for many applications. Both new methods considerably outperform the data completion method regarding the computational load. CONCLUSIONS: The new ROI reconstruction methods are superior to the gold standard regarding the computational load. Comparing the image quality with the gold standard, the data filtering method is slightly inferior and the data weighting method yields equal quality.
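Completing a truncated projection can be sketched for one detector row. This toy version splices linearly interpolated values from the low-resolution overview into the channels missing from the high-resolution scan; the gold standard actually forward projects the low-resolution reconstruction, so the aligned grids and the integer channel-size ratio here are simplifying assumptions.

```python
def complete_projection(hi_res, hi_start, lo_res, scale):
    # Fill the full high-resolution channel grid: measured ROI channels
    # keep their values; truncated channels are linearly interpolated
    # from the low-resolution overview row. `scale` is the integer
    # channel-size ratio lo/hi.
    n_full = len(lo_res) * scale
    out = []
    for ch in range(n_full):
        if hi_start <= ch < hi_start + len(hi_res):
            out.append(hi_res[ch - hi_start])       # measured ROI data
        else:
            pos = (ch + 0.5) / scale - 0.5          # position on lo grid
            i = max(0, min(len(lo_res) - 2, int(pos)))
            w = min(max(pos - i, 0.0), 1.0)
            out.append((1 - w) * lo_res[i] + w * lo_res[i + 1])
    return out
```

The completed row can then be filtered and backprojected without truncation artifacts, which is exactly the property the faster weighting and filtering methods reproduce at lower cost.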


Subjects
Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Head/diagnostic imaging; Intervertebral Disc/diagnostic imaging; Phantoms, Imaging; Prostheses and Implants
17.
Comput Methods Programs Biomed ; 98(3): 253-60, 2010 Jun.
Article in English | MEDLINE | ID: mdl-19765852

ABSTRACT

Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans as performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles over which the voxel is seen by the x-ray cone, is a complex function of the voxel position. The weight function has no analytically closed form and must be determined numerically. Storing the weights is prohibitive, since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore on the order of 10^9 to 10^11 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit CPUs to store large amounts of precomputed weights: the backprojection is carried out into slices that rotate in the same manner as the spiral trajectory. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 24.6 giga voxel updates per second (GUPS) on our systems equipped with two standard Intel X5570 quad core CPUs (Intel Xeon 5500 platform, 2.93 GHz, Intel Corporation). This equals the reconstruction of 410 images per second, assuming each slice consists of 512 x 512 pixels and receives contributions from 512 projections.
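The storage argument can be made concrete with a short calculation. The slice count per rotation and the projections per voxel below are illustrative assumptions, chosen only to show that a single-rotation weight table (which the spiral symmetry makes sufficient for the whole scan) already reaches the quoted 10^9 to 10^11 range.

```python
def weights_per_rotation(nx=512, ny=512, slices_per_rotation=64,
                         projections_per_voxel=512):
    # Weight entries for ONE spiral rotation: voxels per rotation times
    # the number of projections contributing to each voxel. With the
    # spiral symmetry w(x, y, z + d, alpha + 2*pi) = w(x, y, z, alpha),
    # this single-rotation table serves the entire scan.
    return nx * ny * slices_per_rotation * projections_per_voxel

def table_size_gib(entries, bytes_per_value=4):
    # Table size in GiB for single-precision weights.
    return entries * bytes_per_value / 2 ** 30
```

At 4 bytes per weight the single-rotation table is 32 GiB for these sizes, which exceeds a 32 bit address space but fits the 64 bit systems the paper targets.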


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Spiral Cone-Beam Computed Tomography; Time Factors
18.
Phys Med Biol ; 54(12): 3691-708, 2009 Jun 21.
Article in English | MEDLINE | ID: mdl-19478378

ABSTRACT

Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans as performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles over which the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply by a voxel-specific weight w(x, y, z, alpha) prior to adding a projection from angle alpha to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be determined numerically. Storing the weights is prohibitive, since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore on the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems equipped with four standard Intel X7460 hexa core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second, assuming that each slice consists of 512 x 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry.
In its present version, the spiral backprojection algorithm is pixel-driven. A ray-driven version and a corresponding high performance forward projector can be easily designed. Thus, our findings can be used to speed up any type of image reconstruction algorithm (approximate or exact analytical algorithms and iterative algorithms) and therefore yield a versatile and valuable component of future image reconstruction pipelines.


Subjects
Algorithms; Imaging, Three-Dimensional/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, Spiral Computed/methods; Reproducibility of Results; Sensitivity and Specificity
19.
Med Phys ; 35(12): 5898-909, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19175145

ABSTRACT

The size of the field of measurement (FOM) in computed tomography is limited by the size of the x-ray detector. In general, the detector is mounted symmetrically with respect to the rotation axis, such that the transaxial FOM diameter approximately equals the lateral dimension of the detector demagnified to the isocenter. To enlarge the FOM, one may laterally shift the detector by up to 50% of its size. Well-known weighting functions must then be applied to the raw data prior to convolution and backprojection, and a full scan or a scan with more than 360 degrees of angular coverage is required to obtain complete data. However, there is a small region, the inner FOM, that is covered redundantly and where a partial scan reconstruction may be sufficient. A new weighting function is proposed that allows one to reconstruct partial scan data in this inner FOM while reconstructing full scan or overscan data for the outer FOM, the part that contains no redundancies. The presented shifted detector partial scan algorithm achieves a high temporal resolution in the inner FOM while maintaining truncation-free images for the outer part. The partial scan window can be shifted arbitrarily in the angular direction, which corresponds to shifting the temporal window of the data shown in the inner FOM. This feature allows for the reconstruction of dynamic CT data with high temporal resolution. The approach presented here is evaluated using simulated and measured data for a dual source micro-CT scanner with rotating gantry.
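For the full-scan case, the underlying redundancy weighting for a laterally shifted detector can be sketched with the classic sin^2 transition: a ray at channel u and its conjugate at -u must receive weights summing to one inside the doubly covered region. This shows only the standard full-scan component, not the proposed partial-scan window function.

```python
import math

def shifted_detector_weight(u, overlap):
    # Full-scan redundancy weight for a laterally shifted detector.
    # The detector covers u in [-overlap, u_max]: only |u| <= overlap is
    # measured twice per rotation (by the opposing ray at -u), so the
    # weights of a ray and its conjugate must sum to one there.
    if u <= -overlap:
        return 0.0
    if u >= overlap:
        return 1.0
    return math.sin(math.pi / 4.0 * (1.0 + u / overlap)) ** 2
```

The smooth transition avoids the streaks that a hard 0/1 cut at the overlap boundary would produce after filtering.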


Subjects
Cone-Beam Computed Tomography/methods; Tomography, X-Ray Computed/methods; Algorithms; Animals; Computer Simulation; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Mice; Models, Statistical; Phantoms, Imaging; Reproducibility of Results; Time Factors; Tomography Scanners, X-Ray Computed
20.
Med Phys ; 34(4): 1474-86, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17500478

ABSTRACT

Tomographic image reconstruction, such as the reconstruction of computed tomography projection values, of tomosynthesis data, of positron emission tomography or SPECT events, and of magnetic resonance imaging data, is computationally very demanding. One of the most time-consuming steps is the backprojection. Recently, a novel general purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). To maximize image reconstruction speed, we modified our parallel-beam backprojection algorithm [two-dimensional (2D)] and our perspective backprojection algorithm [three-dimensional (3D), cone beam for flat-panel detectors] and optimized the code for the CBE. The algorithms are pixel or voxel driven, run with floating point accuracy, and use linear (LI) or nearest neighbor (NN) interpolation between detector elements. For the parallel-beam case, 512 projections per half rotation, 1024 detector channels, and an image of size 512^2 were used. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of size 1024^2 into a volume of 512^3 voxels. The field of view was chosen to lie completely within the field of measurement, and the pixel or voxel size was set to the detector element size projected to the center of rotation divided by the square root of 2. Both the PC and the CBE were clocked at 3 GHz. For the parallel backprojection of 512 projections into a 512^2 image, a throughput of 11 fps (LI) and 15 fps (NN) was measured on the PC, whereas the CBE achieved 126 fps (LI) and 165 fps (NN), respectively. The cone-beam backprojection of 512 projections into the 512^3 volume took 3.2 min on the PC and only 13.6 s on the CBE. Thereby, the CBE greatly outperforms today's top-notch backprojections based on graphics processing units.
Using both CBEs of our dual cell-based blade (Mercury Computer Systems) allows one to backproject 330 images/s in 2D and to complete the 3D cone-beam backprojection in 6.8 s.
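The pixel-driven parallel-beam backprojection with NN interpolation that is benchmarked above can be sketched in scalar Python; the centered grids and unit detector spacing below are assumptions made for the example. The CBE implementation vectorizes exactly this inner loop.

```python
import math

def backproject_parallel(sinogram, angles, n):
    # Pixel-driven 2D parallel-beam backprojection with nearest-neighbor
    # (NN) interpolation. sinogram[a][d] holds the filtered value for
    # view angle angles[a] and detector channel d; the detector channel
    # spacing equals the pixel spacing, and both grids are centered.
    n_det = len(sinogram[0])
    img = [[0.0] * n for _ in range(n)]
    for a, theta in enumerate(angles):
        c, s = math.cos(theta), math.sin(theta)
        for y in range(n):
            for x in range(n):
                # signed distance of the pixel center from the center ray
                t = (x - (n - 1) / 2) * c + (y - (n - 1) / 2) * s
                d = int(round(t + (n_det - 1) / 2))   # NN channel index
                if 0 <= d < n_det:
                    img[y][x] += sinogram[a][d]
    scale = math.pi / len(angles)                     # view weighting
    return [[v * scale for v in row] for row in img]
```

Backprojecting a constant sinogram into pixels that are seen by every view reproduces a constant image, a quick sanity check for the geometry conventions.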


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/methods; Signal Processing, Computer-Assisted/instrumentation; Tomography, Emission-Computed/instrumentation; Tomography, Spiral Computed/instrumentation; Equipment Design; Equipment Failure Analysis; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity; Tomography, Emission-Computed/methods; Tomography, Spiral Computed/methods