Results 1 - 20 of 42
1.
Eur Radiol ; 33(8): 5321-5330, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37014409

ABSTRACT

Since 1971 and Hounsfield's first CT system, clinical CT systems have used scintillating energy-integrating detectors (EIDs) that use a two-step detection process. First, the X-ray energy is converted into visible light, and second, the visible light is converted to electronic signals. An alternative, one-step, direct X-ray conversion process using energy-resolving, photon-counting detectors (PCDs) has been studied in detail and early clinical benefits reported using investigational PCD-CT systems. Subsequently, the first clinical PCD-CT system was commercially introduced in 2021. Relative to EIDs, PCDs offer better spatial resolution, higher contrast-to-noise ratio, elimination of electronic noise, improved dose efficiency, and routine multi-energy imaging. In this review article, we provide a technical introduction to the use of PCDs for CT imaging and describe their benefits, limitations, and potential technical improvements. We discuss different implementations of PCD-CT ranging from small-animal systems to whole-body clinical scanners and summarize the imaging benefits of PCDs reported using preclinical and clinical systems. KEY POINTS: • Energy-resolving, photon-counting-detector CT is an important advance in CT technology. • Relative to current energy-integrating scintillating detectors, energy-resolving, photon-counting-detector CT offers improved spatial resolution, improved contrast-to-noise ratio, elimination of electronic noise, increased radiation and iodine dose efficiency, and simultaneous multi-energy imaging. • High-spatial-resolution, multi-energy imaging using energy-resolving, photon-counting-detector CT has been used in investigations into new imaging approaches, including multi-contrast imaging.
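The CNR advantage of equal photon weighting over energy weighting can be sketched numerically. Below is a minimal toy model (all energies and counts are hypothetical, chosen only to mimic iodine's stronger low-energy contrast; this is not the review's analysis):

```python
import numpy as np

# Toy two-bin spectrum (hypothetical numbers, for illustration only):
# the low-energy bin carries most of the iodine contrast.
energies   = np.array([50.0, 80.0])      # bin energies [keV]
counts_bg  = np.array([1000.0, 1000.0])  # mean photons behind background
counts_iod = np.array([800.0, 950.0])    # behind background + iodine insert

def cnr(weights):
    """CNR of a weighted Poisson detector signal (analytic, no sampling)."""
    contrast = weights @ (counts_bg - counts_iod)
    noise = np.sqrt(weights**2 @ counts_bg)  # variance of weighted Poisson sum
    return contrast / noise

cnr_eid = cnr(energies)                # EID: each photon weighted by its energy
cnr_pcd = cnr(np.ones_like(energies))  # PCD: each photon counted once
print(f"EID CNR: {cnr_eid:.2f}, PCD CNR: {cnr_pcd:.2f}")
```

With these numbers the PCD comes out ahead because equal weighting no longer down-weights the low-energy photons that carry most of the iodine contrast.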


Subjects
Iodine; Tomography, X-Ray Computed; Animals; Tomography, X-Ray Computed/methods; Photons; X-Rays; Phantoms, Imaging
2.
Med Phys ; 51(3): 1822-1831, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37650780

ABSTRACT

BACKGROUND: Due to technical constraints, dual-source dual-energy CT scans may lack spectral information in the periphery of the patient. PURPOSE: Here, we propose a deep learning-based iterative reconstruction to recover the missing spectral information outside the field of measurement (FOM) of the second source-detector pair. METHODS: In today's Siemens dual-source CT systems, one source-detector pair (referred to as A) typically has a FOM of about 50 cm, while the FOM of the other pair (referred to as B) is limited by technical constraints to a diameter of about 35 cm. As a result, dual-energy applications are currently only available within the small FOM, limiting their use for larger patients. To derive a reconstruction at B's energy for the entire patient cross-section, we propose a deep learning-based iterative reconstruction. Starting with A's reconstruction as the initial estimate, it employs a neural network in each iteration to refine the current estimate according to a raw data fidelity measure. The corresponding mapping is trained using simulated chest, abdomen, and pelvis scans based on a data set containing 70 full-body CT scans. Finally, the proposed approach is tested on simulated and measured dual-source dual-energy scans and compared against existing reference approaches. RESULTS: For all test cases, the proposed approach was able to provide artifact-free CT reconstructions of B for the entire patient cross-section. For simulated data, the remaining error of the reconstructions is between 10 and 17 HU on average, about half that of the reference approaches. A similar performance, with an average error of 8 HU, was achieved for real phantom measurements. CONCLUSIONS: The proposed approach is able to recover missing dual-energy information for patients exceeding the small 35 cm FOM of dual-source CT systems. It therefore has the potential to extend dual-energy applications to the entire patient cross-section.
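The alternating scheme described in METHODS can be sketched as a 1-D toy (our reading of the abstract, not the authors' implementation: `refine` stands in for the trained network, and a simple mask projection stands in for the raw data fidelity measure):

```python
import numpy as np

# 1-D toy of the alternating refine/fidelity scheme (illustrative stand-ins).

def refine(x):
    # Stand-in for the trained network: simple neighborhood smoothing.
    return np.convolve(x, np.ones(5) / 5.0, mode="same")

def data_fidelity(x, measured, mask):
    # Re-impose the measurements where system B actually has data.
    out = x.copy()
    out[mask] = measured[mask]
    return out

n = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n))  # toy object at B's energy
fom_mask = np.zeros(n, dtype=bool)
fom_mask[60:140] = True                       # toy analog of B's 35 cm FOM

x = np.zeros(n)  # in the paper, A's reconstruction is the initial estimate
for _ in range(50):
    x = data_fidelity(refine(x), truth, fom_mask)

print(f"max |error| inside FOM: {np.abs(x - truth)[fom_mask].max():.3f}")
```

The fidelity step guarantees consistency with the measured region, while the refinement step propagates plausible structure into the unmeasured periphery.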


Subjects
Deep Learning; Humans; Tomography, X-Ray Computed; Thorax; Phantoms, Imaging; Algorithms; Image Processing, Computer-Assisted
3.
Med Phys ; 51(3): 1597-1616, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38227833

ABSTRACT

BACKGROUND: Multislice spiral computed tomography (MSCT) requires an interpolation between adjacent detector rows during backprojection. Failing to satisfy the Nyquist sampling condition along the z-axis results in aliasing effects, also known as windmill artifacts. These image distortions are characterized by bright streaks diverging from high-contrast structures. PURPOSE: The z-flying focal spot (zFFS) is a well-established hardware-based solution that doubles the sampling rate in the longitudinal direction and thereby reduces aliasing artifacts. Given the technical complexity of the zFFS, however, this work proposes a deep learning-based approach as an alternative solution. METHODS: We propose a supervised learning approach that maps input projections to the corresponding rows required for double sampling in the z-direction. We present a comprehensive evaluation using both a clinical dataset of raw data from 40 patient scans acquired with the zFFS and a synthetic dataset of 100 simulated spiral scans of a phantom designed specifically for this problem. For the clinical dataset, we used 32 scans for training and 8 scans for validation, whereas for the synthetic dataset, we used 80 scans for training and 20 for validation. Both qualitative and quantitative assessments were conducted on a test set of nine patient scans and six phantom measurements to validate the performance of our approach. A simulation study was performed to investigate robustness against different scan configurations in terms of detector collimation and pitch. RESULTS: In the quantitative comparison based on clinical patient scans from the test set, all network configurations improve the root mean square error (RMSE) by approximately 20% compared to neglecting the doubled longitudinal sampling of the zFFS.
The qualitative analysis indicates that both clinical and synthetic training data can reduce windmill artifacts through a correspondingly trained network. Together with the qualitative results from the phantom measurements in the test set, this shows that training our method with synthetic data yields superior windmill artifact reduction. CONCLUSIONS: Deep learning-based raw data interpolation has the potential to enhance the sampling in the z-direction and thus minimize aliasing effects, as is the case with the zFFS. Training with synthetic data in particular showed promising results. While it may not outperform the zFFS, our method represents a beneficial solution for CT scanners lacking the hardware required for the zFFS.
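The supervised mapping described above implies a simple training-pair construction, sketched here on toy data (our interpretation of the abstract; `make_training_pair`, the row layout, and the neighbor-averaging baseline standing in for the network are all assumptions):

```python
import numpy as np

# A zFFS scan samples detector rows at twice the z-rate: the even rows serve
# as network input (the sampling available without zFFS) and the odd,
# interleaved rows as the prediction target.

def make_training_pair(projection_fine):
    """projection_fine: (rows, channels), sampled at double z-rate."""
    x = projection_fine[0::2]  # coarse sampling, as without zFFS
    y = projection_fine[1::2]  # interleaved rows the network must predict
    return x, y

z = np.linspace(0.0, 1.0, 64)[:, None]
proj = np.cos(6 * np.pi * z) * np.ones((1, 8))  # toy projection data
x, y = make_training_pair(proj)

# Trivial baseline "network": average neighboring coarse rows.
y_hat = 0.5 * (x[:-1] + x[1:])
rmse = np.sqrt(np.mean((y_hat - y[:-1]) ** 2))
print(f"baseline RMSE: {rmse:.4f}")
```

A trained network would be judged against exactly this kind of interpolation baseline.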


Subjects
Artifacts; Deep Learning; Humans; Tomography, Spiral Computed/methods; Tomography Scanners, X-Ray Computed; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Algorithms
4.
Med Phys ; 49(8): 5038-5051, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35722721

ABSTRACT

PURPOSE: We aim to develop a model-based algorithm that compensates for the effects of both pulse pileup (PP) and charge sharing (CS), and to evaluate its performance using computer simulations. METHODS: The proposed PCP algorithm for PP and CS compensation uses the cascaded models for CS and PP we previously developed, maximizes the Poisson log-likelihood, and uses an efficient three-step exhaustive search. For comparison, we also developed an LCP algorithm that combines models for a loss of counts (LCs) and CS. Two types of computer simulations, slab-based and computed tomography (CT)-based, were performed to assess the performance of both PCP and LCP with 200 and 800 mA, a (300 µm)² × 1.6-mm cadmium telluride detector, and a dead-time of 23 ns. The slab-based assessment used pairs of adipose and iodine slabs of different thicknesses to attenuate X-rays and assessed the bias and noise of the outputs from one detector pixel; the CT-based assessment simulated a chest/cardiac scan and a head-and-neck scan using a 3D phantom and noisy cone-beam projections. RESULTS: In the slab simulation, the PCP had little or no bias when the expected counts were sufficiently large, even when the probability of count loss (PCL) due to dead-time loss or PP was as high as 0.8. In contrast, the LCP had significant biases (>±2 cm of adipose) when the PCL was higher than 0.15. Biases were present with both PCP and LCP when the expected counts were less than 10-120 per datum, which was attributed to the maximum likelihood estimate not approaching its asymptote. The noise of the PCP was within 8% of the Cramér-Rao lower bound in most cases when no significant bias was present. The two CT studies essentially agreed with the slab simulation study. The PCP had little or no bias in the estimated basis line integrals, reconstructed basis density maps, and synthesized monoenergetic CT images.
The LCP, however, had significant biases in basis line integrals when X-ray beams passed through lungs and near the body and neck contours, where the PCLs were above 0.15. As a consequence, basis density maps and monoenergetic CT images obtained by the LCP had biases throughout the imaged space. CONCLUSION: We have developed the PCP algorithm, which uses the PP-CS model. When the expected counts are more than 10-120 per datum, the PCP algorithm is statistically efficient and successfully compensates for the spectral distortion due to both PP and CS, providing little or no bias in basis line integrals, basis density maps, and monoenergetic CT images regardless of count rate. In contrast, the LCP algorithm, which models an LC due to pileup, produces severe biases when incident count rates are high and the PCL is 0.15 or higher.
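The reported probabilities of count loss (PCL) can be related to the incident count rate through standard dead-time models, shown here for the 23 ns dead-time from the abstract (these are generic textbook models, not the paper's cascaded PP/CS models):

```python
import math

TAU = 23e-9  # detector dead-time [s], as given in the abstract

def pcl_paralyzable(rate):
    """Probability of count loss for a paralyzable dead-time model."""
    return 1.0 - math.exp(-rate * TAU)

def pcl_nonparalyzable(rate):
    """Probability of count loss for a non-paralyzable dead-time model."""
    return rate * TAU / (1.0 + rate * TAU)

for mcps in (1, 10, 30, 70):  # incident rate per pixel [Mcps]
    r = mcps * 1e6
    print(f"{mcps:3d} Mcps: paralyzable PCL = {pcl_paralyzable(r):.3f}, "
          f"non-paralyzable PCL = {pcl_nonparalyzable(r):.3f}")
```

At roughly 70 Mcps per pixel the paralyzable model reaches a PCL of about 0.8, the highest value considered in the abstract.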


Subjects
Photons; Tomography, X-Ray Computed; Computer Simulation; Phantoms, Imaging; Radiography; Tomography, X-Ray Computed/methods
5.
Med Phys ; 49(3): 1495-1506, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34822186

ABSTRACT

PURPOSE: A motion compensation method aimed at correcting motion artifacts of cardiac valves is proposed, with primary focus on the aortic valve. METHODS: The method is based on partial angle reconstructions and a cost function that includes the image entropy. A motion model is applied to approximate the cardiac motion in the temporal and spatial domains. Based on characteristic values for velocity and strain during cardiac motion, penalties on the velocity and spatial derivatives are introduced to maintain anatomically realistic motion vector fields and avoid distortions. The model addresses global elastic deformation, but not the finer and more complicated motion of the valve leaflets. RESULTS: The method is verified on clinical data. Image quality was improved for most artifact-impaired reconstructions. An image quality study with Likert scoring of motion artifact severity on a scale from 1 (highest image quality) to 5 (lowest image quality/extreme artifacts) was performed. The biggest improvements after applying motion compensation were achieved for strongly artifact-impaired initial images scoring 4 and 5, with average score changes of −0.59 ± 0.06 and −1.33 ± 0.03, respectively. For artifact-free images, a risk of introducing blurring was observed, raising their average score by 0.42 ± 0.03. CONCLUSION: Motion artifacts were consistently removed and image quality improved.
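The image-entropy term of such a cost function can be sketched as follows (a generic histogram-entropy formulation assumed for illustration; the paper's full cost also includes the velocity and strain penalties, omitted here):

```python
import numpy as np

# Histogram-based image entropy: sharper images concentrate their gray values
# in fewer histogram bins and therefore have lower entropy, which is why
# entropy is a usable motion-artifact cost.
def image_entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
sharp = np.zeros((64, 64))
sharp[20:40, 20:40] = 1.0                                   # clean two-level image
corrupted = sharp + 0.2 * rng.standard_normal(sharp.shape)  # artifact stand-in

print(image_entropy(sharp) < image_entropy(corrupted))
```

A motion-compensation optimizer would adjust the motion model parameters to minimize this entropy over the reconstructed volume.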


Subjects
Aortic Valve; Image Processing, Computer-Assisted; Algorithms; Aortic Valve/diagnostic imaging; Artifacts; Image Processing, Computer-Assisted/methods; Motion; Tomography, X-Ray Computed
6.
Med Phys ; 49(8): 5014-5037, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35651302

ABSTRACT

BACKGROUND: Various clinical studies show the potential for a wider quantitative role of diagnostic X-ray computed tomography (CT) beyond size measurements. Currently, however, the clinical use of attenuation values is limited by their lack of robustness, an issue that can be observed even on the same scanner across patient size and positioning. There are different causes for this lack of robustness; one possible source of error is beam hardening of the X-ray source spectrum. The conventional and well-established approach to address this issue is a calibration-based single-material beam hardening correction (BHC) using a water cylinder. PURPOSE: We investigate an alternative approach to single-material BHC with the aim of producing more robust attenuation values. The underlying hypothesis is that calibration-based BHC automatically corrects for scattered radiation in a manner that becomes suboptimal in terms of bias as soon as the scanned object deviates strongly from the water cylinder used for calibration. METHODS: The proposed approach performs BHC via an analytical energy response model embedded in a correction pipeline that efficiently estimates and subtracts scattered radiation in a patient-specific manner prior to BHC. The estimation of scattered radiation is based on minimizing, on average, the squared difference between our corrected data and the vendor-calibrated data. The energy response model accounts for the spectral effects of the detector response and the prefiltration of the source spectrum, including a beam-shaping bowtie filter. The performance of the correction pipeline is first characterized with computer-simulated data. Afterward, it is tested using real 3-D CT data sets of two different phantoms, with various kV settings and phantom positions, assuming a circular data acquisition. The results are compared in the image domain to those from the scanner.
RESULTS: For experiments with a water cylinder, the proposed correction pipeline leads to results similar to the vendor's. For reconstructions of a QRM liver phantom with extension ring, the proposed pipeline achieved a more uniform and stable outcome in the attenuation values of homogeneous materials within the phantom. For example, the root mean squared deviation between centered and off-centered phantom positioning was reduced from 6.6 to 1.8 HU in one profile. CONCLUSIONS: We have introduced a patient-specific approach for single-material BHC in diagnostic CT via an analytical energy response model. This approach shows promising improvements in the robustness of attenuation values for large patient sizes. Our results contribute toward improving CT images so as to make CT attenuation values more reliable for use in clinical practice.
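For contrast, the conventional calibration-based water BHC that this work builds on can be sketched in a few lines (spectrum and attenuation values below are rough illustrative numbers, not taken from the paper):

```python
import numpy as np

# Water BHC sketch: simulate polychromatic line integrals through water, then
# fit a polynomial mapping them back to ideal monochromatic line integrals.
phi = np.array([0.2, 0.4, 0.3, 0.1])      # normalized fluence per energy bin
mu  = np.array([0.27, 0.21, 0.18, 0.17])  # water attenuation [1/cm] per bin
mu_ref = mu[1]                            # reference (effective) energy

L = np.linspace(0.0, 40.0, 100)           # water path length [cm]
poly_proj = -np.log(phi @ np.exp(-np.outer(mu, L)))  # measured line integrals
mono_proj = mu_ref * L                               # ideal target

coef = np.polyfit(poly_proj, mono_proj, 3)  # calibration polynomial (cubic)
corrected = np.polyval(coef, poly_proj)
print(f"max |residual| after BHC: {np.abs(corrected - mono_proj).max():.3f} "
      f"(uncorrected: {np.abs(poly_proj - mono_proj).max():.3f})")
```

The polynomial is exact only for water-like objects, which is precisely the limitation the patient-specific pipeline above targets.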


Subjects
Tomography, X-Ray Computed; Water; Algorithms; Calibration; Humans; Phantoms, Imaging; Tomography, X-Ray Computed/methods; X-Rays
7.
Biomed Phys Eng Express ; 8(2), 2022 Feb 18.
Article in English | MEDLINE | ID: mdl-34983885

ABSTRACT

Data truncation in computed tomography (CT) occurs when the patient extends beyond the scan field of view (SFOV) of the scanner, leaving parts of the projections unmeasured. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Our aim is thereby to generate a good estimate of the real patient data, not to provide a perfect, diagnostic image in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher-order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs) were investigated in this paper: a general CNN called ConvNet, an autoencoder, and the U-Net architecture. Additionally, the impact of the L1, L2, structural dissimilarity, and perceptual loss functions on the networks' learning has been assessed and evaluated. The evaluation on a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity measures on the test set compared to the other networks. It is therefore possible to achieve truncation artifact removal using deep learning techniques.
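The structural dissimilarity loss can be sketched from its common definition, DSSIM = (1 − SSIM)/2; the single-window (global) SSIM below is a simplification of the windowed SSIM a real training pipeline would use:

```python
import numpy as np

# Global (single-window) SSIM with the usual stabilizing constants c1, c2.
def ssim_global(x, y, data_range=1.0):
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def dssim(x, y):
    """Structural dissimilarity: 0 for identical images, up to 1."""
    return (1.0 - ssim_global(x, y)) / 2.0

rng = np.random.default_rng(1)
reference = rng.random((32, 32))
distorted = reference + 0.3 * rng.random((32, 32))

print(f"DSSIM(ref, ref)       = {dssim(reference, reference):.4f}")
print(f"DSSIM(ref, distorted) = {dssim(reference, distorted):.4f}")
```

Unlike a pure L2 loss, DSSIM penalizes structural and contrast discrepancies rather than per-pixel differences alone, which matches its use for plausible anatomy completion beyond the SFOV.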


Subjects
Deep Learning; Artifacts; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods
8.
Med Phys ; 48(7): 3479-3499, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33838055

ABSTRACT

PURPOSE: In this work, we explore the potential of region-of-interest (ROI) imaging in x-ray computed tomography (CT). Using two previously developed dynamic beam attenuator (DBA) concepts for fluence field modulation (FFM), we investigate and evaluate the potential dose savings in comparison with current FFM technology. METHODS: ROI imaging is a special application of FFM in which the bulk of the x-ray radiation is directed toward a certain anatomical target (the ROI), specified by the imaging task, while the surrounding tissue is spared from radiation. We introduce a criterion suitable for quantitatively describing the balance between image quality inside the ROI and total radiation dose with respect to a given ROI imaging task. It accounts for the mean image variance in the ROI and the effective patient dose calculated from Monte Carlo simulations. The criterion is further used to compile task-specific DBA trajectories determining the primary x-ray fluence, and eventually to compare different FFM techniques, namely the sheet-based dynamic beam attenuator (sbDBA), the z-aligned sbDBA (z-sbDBA), and an adjustable static operation mode of the z-sbDBA. Furthermore, two static bowtie filters and the influence of tube current modulation (TCM) are included in the comparison. RESULTS: Our simulations demonstrate that the presented trajectory optimization method determines reasonable DBA trajectories. The influence of TCM depends strongly on the imaging task. The narrow bowtie filter allows dose reductions of about 10% compared to the regular bowtie filter in the considered ROI imaging tasks. The DBAs are shown to realize substantially larger dose reductions. In our cardiac imaging scenario, the DBAs can reduce the effective dose by about 30% (z-sbDBA) or 60% (sbDBA). We further verify that the noise characteristics are not adversely affected by the DBAs.
CONCLUSION: Our research demonstrates that ROI imaging using the presented DBA concepts is a promising technique toward a more patient- and task-specific CT imaging requiring lower radiation dose. Both the sbDBA and the z-sbDBA are potential technical solutions for realizing ROI imaging in x-ray CT.


Subjects
Technology; Tomography, X-Ray Computed; Humans; Monte Carlo Method; Phantoms, Imaging; Radiation Dosage; X-Rays
9.
Med Phys ; 48(9): 4824-4842, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34309837

ABSTRACT

PURPOSE: Dual-source computed tomography (DSCT) uses two source-detector pairs offset by about 90°. In addition to the well-known forward scatter, a special issue in DSCT is cross-scattered radiation from X-ray tube A detected in the detector of system B and vice versa. This effect can lead to artifacts and a reduction of the contrast-to-noise ratio of the images. The purpose of this work is to present and evaluate different deep learning-based methods for scatter correction in DSCT. METHODS: We present different neural network-based methods for forward and cross-scatter correction in DSCT. These deep scatter estimation (DSE) methods differ mainly in the input and output information provided for training and inference and in whether they operate on two-dimensional (2D) or three-dimensional (3D) data. The networks are trained and validated with scatter distributions obtained by our in-house Monte Carlo simulation. The simulated geometry is adapted to a realistic clinical setup. RESULTS: All DSE approaches reduce scatter-induced artifacts and yield better results than the measurement-based scatter correction. Forward scatter, in the presence of cross-scatter, is best estimated either by our network that uses the current projection and a couple of neighboring views (fDSE 2D few views) or by our 3D network that processes all projections simultaneously (fDSE 3D). Cross-scatter, in the presence of forward scatter, is best estimated using xSSE xDSE 2D, with xSSE referring to a quick single-scatter estimate of cross-scatter, or by xDSE 3D, which uses all projections simultaneously. Using our proposed networks, the total scatter error in dual-source CT could be reduced from about 18 HU to approximately 3 HU. CONCLUSIONS: Deep learning-based scatter correction can reduce scatter artifacts in DSCT. Providing a quick cross-scatter approximation as additional network input improves the accuracy of the cross-scatter estimates.
Likewise, the ability to leverage information across different projection angles improves the precision of the algorithm.


Subjects
Deep Learning; Algorithms; Artifacts; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Phantoms, Imaging; Scattering, Radiation; Tomography, X-Ray Computed
10.
Med Phys ; 48(7): 3559-3571, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33959983

ABSTRACT

PURPOSE: During a typical cardiac short scan, the heart can move several millimeters. As a result, the corresponding CT reconstructions may be corrupted by motion artifacts. The assessment of small structures in particular, such as the coronary arteries, is potentially impaired by the presence of these artifacts. In order to estimate and compensate for coronary artery motion, this manuscript proposes the deep partial angle-based motion compensation (Deep PAMoCo). METHODS: The basic principle of the Deep PAMoCo relies on the concept of partial angle reconstructions (PARs): it divides the short-scan data into several consecutive angular segments and reconstructs them separately. Subsequently, the PARs are deformed according to a motion vector field (MVF) such that they represent the same motion state, and are summed to obtain the final motion-compensated reconstruction. However, in contrast to prior work based on the same principle, the Deep PAMoCo estimates and applies the MVF via a deep neural network to increase the computational performance as well as the quality of the motion-compensated reconstructions. RESULTS: Using simulated data, we demonstrate that the Deep PAMoCo is able to remove almost all motion artifacts independent of the contrast, the radius, and the motion amplitude of the coronary artery. In all cases, the average error of the CT values along the coronary artery is about 25 HU, whereas errors of up to 300 HU can be observed if no correction is applied. Similar results were obtained for clinical cardiac CT scans, where the Deep PAMoCo clearly outperforms state-of-the-art coronary artery motion compensation approaches in terms of processing time as well as accuracy. CONCLUSIONS: The Deep PAMoCo provides an efficient approach to increase the diagnostic value of cardiac CT scans even when they are highly corrupted by motion.
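The PAR principle itself (without the network) can be illustrated with a 1-D toy: displaced partial reconstructions are shifted back to a common motion state before summation. All numbers below are made up, and integer shifts stand in for the MVF deformation:

```python
import numpy as np

# 1-D toy of partial-angle motion compensation. The Deep PAMoCo would
# *estimate* the per-segment motion with a network; here it is given.

def compensate(pars, shifts):
    """Shift each partial reconstruction back to a common state, then sum."""
    return sum(np.roll(p, -s) for p, s in zip(pars, shifts))

n = 100
obj = np.zeros(n)
obj[48:52] = 1.0                    # small high-contrast "coronary" structure
motion = [0, 2, 4]                  # object displacement per angular segment
pars = [np.roll(obj, m) / len(motion) for m in motion]  # toy PARs

uncorrected = sum(pars)             # naive sum: three displaced partial copies
corrected = compensate(pars, motion)
print(f"peak value: uncorrected {uncorrected.max():.2f}, "
      f"corrected {corrected.max():.2f}")
```

The uncorrected sum smears the structure across its motion path, while the aligned sum restores full contrast, mirroring the HU errors quoted above.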


Subjects
Coronary Vessels; Deep Learning; Algorithms; Artifacts; Coronary Vessels/diagnostic imaging; Image Processing, Computer-Assisted; Motion; Phantoms, Imaging; Tomography, X-Ray Computed
11.
Med Phys ; 37(2): 897-906, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20229899

ABSTRACT

PURPOSE: To determine the constancy of z-axis spatial resolution, CT number, image noise, and the potential for image artifacts for nonconstant velocity spiral CT data reconstructed using a flexibly weighted 3D filtered backprojection (WFBP) reconstruction algorithm. METHODS: A WFBP reconstruction algorithm was used to reconstruct stationary (axial, pitch = 0), constant velocity spiral (pitch = 0.35-1.5), and nonconstant velocity spiral CT data acquired using a 128 × 0.6 mm acquisition mode (38.4 mm total detector length, z-flying focal spot technique) and a gantry rotation time of 0.30 s. Nonconstant velocity scans used the system's periodic spiral mode, in which the table moves in and out of the gantry cyclically. For all scan types, the volume CTDI was 10 mGy. Measurements of CT number, image noise, and the slice sensitivity profile were made for all scan types as a function of the nominal slice width, table velocity, and position within the scan field of view. A thorax phantom was scanned using all modes, and reconstructed transverse and coronal plane images were compared. RESULTS: Negligible differences in slice thickness, CT number, noise, or artifacts were found between scan modes for data taken at two positions within the scan field of view. For nominal slices of 1.0-3.0 mm, FWHM values of the slice sensitivity profiles were essentially independent of the scan type. For periodic spiral scans, FWHM values measured at the center of the scan range were indistinguishable from those taken 5 mm from one end of the scan range. All CT numbers were within ±5 HU, and CT number and noise values were similar for all scan modes assessed. A slight increase in noise and artifact level was observed 5 mm from the start of the scan on the first pass of the periodic spiral. On subsequent passes, noise and artifact levels in the transverse and coronal plane images were the same for all scan modes.
CONCLUSIONS: Nonconstant velocity periodic spiral scans can achieve z-axis spatial resolution, CT number accuracy, image noise, and artifact levels equivalent to those for stationary (axial) and constant velocity spiral scans. Thus, periodic spiral scans are expected to allow assessment of four-dimensional CT data for scan lengths greater than the detector width without sacrificing image quality.
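The FWHM of a measured slice sensitivity profile can be computed by interpolating the half-maximum crossings, a standard evaluation assumed here (the abstract does not spell out its exact estimator), demonstrated on a synthetic Gaussian SSP:

```python
import numpy as np

# FWHM via linear interpolation at the left and right half-maximum crossings.
def fwhm(z, profile):
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i, j = above[0], above[-1]
    # np.interp needs increasing xp; both bracketing pairs satisfy that here.
    z_left = np.interp(half, [profile[i - 1], profile[i]], [z[i - 1], z[i]])
    z_right = np.interp(half, [profile[j + 1], profile[j]], [z[j + 1], z[j]])
    return z_right - z_left

z = np.linspace(-5.0, 5.0, 2001)                   # position along z [mm]
sigma = 3.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian with 3.0 mm FWHM
ssp = np.exp(-z**2 / (2.0 * sigma**2))             # toy SSP, 3 mm nominal slice
print(f"FWHM = {fwhm(z, ssp):.2f} mm")
```

Applied to measured SSPs, this yields the per-scan-mode FWHM values whose constancy the study reports.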


Subjects
Algorithms; Imaging, Three-Dimensional/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, Spiral Computed/methods; Humans; Phantoms, Imaging; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
12.
Tsinghua Sci Technol ; 15(1): 36-43, 2010 Feb.
Article in English | MEDLINE | ID: mdl-21814455

ABSTRACT

We present a theoretically exact and stable computed tomography (CT) reconstruction algorithm that is capable of handling interrupted illumination and therefore of using all measured data at arbitrary pitch. This algorithm is based on a differentiated backprojection (DBP) on M-lines. First, we discuss the problem of interrupted illumination and how it affects the DBP. Then we show that it is possible to take advantage of some properties of the DBP to compensate for the effects of interrupted illumination in a mathematically exact way. From there, we have developed an efficient algorithm, which we have successfully implemented. We show encouraging preliminary results using both computer-simulated data and real data. Our results show that our method achieves a substantial reduction of image noise when decreasing the helix pitch compared with the maximum-pitch case. We conclude that the proposed algorithm defines, for the first time, a theoretically exact and stable reconstruction method capable of beneficially using all measured data at arbitrary pitch.

13.
Med Phys ; 47(10): 4827-4837, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32754971

ABSTRACT

PURPOSE: We present a new concept for dynamic fluence field modulation (FFM) in x-ray computed tomography (CT). The so-called z-aligned sheet-based dynamic beam attenuator (z-sbDBA) is developed to dynamically compensate for variations in patient attenuation across the fan beam and the projection angle. The goal is to enhance image quality and reduce patient radiation dose. METHODS: The z-sbDBA consists of an array of attenuation sheets aligned along the z-direction. In the neutral position, the array is focused toward the focal spot. Tilting the z-sbDBA defocuses the sheets, thus reducing the transmission at larger fan beam angles. The structure of the z-sbDBA differs significantly from the previous sheet-based dynamic beam attenuator (sbDBA) in two features: (a) the sheets of the z-sbDBA are aligned parallel to the detector rows, and (b) the height of the sheets increases from the center toward larger fan beam angles. We built a motor-actuated prototype of the z-sbDBA integrated into a clinical CT scanner. In experiments, we investigated its feasibility for FFM. We compared the z-sbDBA to common CT bowtie filters in terms of the spectral dependency of the transmission and the achievable image variance distribution in reconstructed phantom images. Additionally, the potential radiation dose saving using the z-sbDBA for region-of-interest (ROI) imaging was studied. RESULTS: Our experimental results confirm that the z-sbDBA can realize variable transmission profiles of the radiation fluence with only small tilts. Compared to the sbDBA, the z-sbDBA mitigates some practical and mechanical issues. In comparison to bowtie filters, the spectral dependency is considerably reduced when using the z-sbDBA. Likewise, more homogeneous image variance distributions can be attained in reconstructed phantom images. The z-sbDBA allows control of the spatial image variance distribution, which makes it suitable for ROI imaging.
Our comparison on ROI imaging reveals skin dose reductions of up to 35% at equal ROI image quality when using the z-sbDBA. CONCLUSION: Our new concept for FFM in x-ray CT, the z-sbDBA, was experimentally validated on a clinical CT scanner. It facilitates dynamic FFM by realizing variable transmission profiles across the fan beam angle on a projection-wise basis. This key feature allows for substantial improvements in image quality and a reduction in patient radiation dose, and additionally provides a technical solution for ROI imaging.


Subjects
Tomography, X-Ray Computed; Humans; Phantoms, Imaging; Radiation Dosage; X-Rays
14.
Med Phys ; 36(12): 5641-53, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20095277

ABSTRACT

PURPOSE: To present the theory of image reconstruction for a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and to evaluate its image quality and dose. METHODS: With the use of two x-ray sources and two data acquisition systems, spiral CT exams with a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited, at maximum, to the SFOV of the second detector, depending on the pitch. Spatial and low-contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom, or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. RESULTS: No significant differences in quantitative measures of image quality were found between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2 for spatial and low-contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6 ≤ pitch ≤ 3.2) had only a minor impact on radiation dose and image noise when the effective tube current-time product (mAs/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch = 3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. Images of the moving coronary artery phantom acquired with the ECG-triggered high-pitch scan mode were visually free from motion artifacts at heart rates of 60 and 70 bpm. However, image quality started to deteriorate at higher heart rates. At equivalent image quality, the ECG-triggered high-pitch scan mode demonstrated lower radiation dose than other cardiac scan techniques on the same DSCT equipment (25% and 60% dose reductions compared to ECG-triggered sequential step-and-shoot and ECG-gated spiral with x-ray pulsing, respectively). CONCLUSIONS: A high-pitch (up to pitch = 3.2), high-temporal-resolution (up to 75 ms) dual-source CT scan mode produced image quality equivalent to that of single-source scans using a more typical pitch value (pitch = 1.0). The resulting reduction in overall acquisition time may offer clinical advantages for cardiovascular, trauma, and pediatric CT applications. In addition, ECG-triggered high-pitch scanning may be useful as an alternative to ECG-triggered sequential scanning for patients with low to moderate heart rates up to 70 bpm, with the potential to scan the heart within one heartbeat at reduced radiation dose.
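The timing and dose relations in the abstract above can be made concrete with a small sketch (the formulas are the standard ones implied by the text, i.e. quarter-rotation temporal resolution for a dual-source system and effective mAs = mAs/pitch; the specific numbers are illustrative, not taken from the paper):

```python
def temporal_resolution_ms(rotation_time_s: float) -> float:
    """Nominal per-image temporal resolution of a dual-source scanner:
    one quarter of the gantry rotation time, in milliseconds."""
    return rotation_time_s / 4.0 * 1000.0


def effective_mas(tube_current_ma: float, rotation_time_s: float, pitch: float) -> float:
    """Effective tube current-time product (mAs/pitch), the quantity the
    abstract reports as held constant across pitch values."""
    return tube_current_ma * rotation_time_s / pitch


if __name__ == "__main__":
    # A 0.3 s rotation gives the 75 ms temporal resolution quoted above.
    print(temporal_resolution_ms(0.3))  # 75.0
    # Keeping the effective mAs constant while doubling the pitch
    # requires doubling the tube current.
    print(effective_mas(200.0, 0.5, 1.6))
    print(effective_mas(400.0, 0.5, 3.2))
```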


Subjects
Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, Spiral Computed/methods , Artifacts , Linear Models , Phantoms, Imaging , Radiation Dosage , Time Factors
15.
Phys Med Biol ; 54(15): 4625-44, 2009 Aug 07.
Article in English | MEDLINE | ID: mdl-19590120

ABSTRACT

We present a new image reconstruction algorithm for helical cone-beam computed tomography (CT). This algorithm is designed for data collected at or near maximum pitch, and provides a theoretically exact and stable reconstruction while beneficially using all measured data. The main operations involved are a differentiated backprojection and a finite-support Hilbert transform inversion. These operations are applied onto M-lines, and the beneficial use of all measured data is gained from averaging three volumes reconstructed each with a different choice of M-lines. The technique is overall similar to that presented by one of the authors in a previous publication, but operates volume-wise, instead of voxel-wise, which yields a significantly more efficient reconstruction procedure. The algorithm is presented in detail. Also, preliminary results from computer-simulated data are provided to demonstrate the numerical stability of the algorithm, the beneficial use of redundant data and the ability to process data collected with an angular flying focal spot.
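The Hilbert transform central to this reconstruction can be sketched numerically; a minimal FFT-based version is shown below (a sketch of the generic operator only, assuming periodic boundary conditions; the paper's finite-support inversion along M-lines is considerably more involved):

```python
import numpy as np


def hilbert_transform(f: np.ndarray) -> np.ndarray:
    """Discrete Hilbert transform of a 1D signal via the frequency-domain
    multiplier -i*sign(omega). In DBP-style reconstruction, the
    differentiated backprojection along an M-line is related to the object
    by such a transform, which must then be inverted on its finite support."""
    F = np.fft.fft(f)
    freqs = np.fft.fftfreq(f.size)
    F *= -1j * np.sign(freqs)  # DC and Nyquist components are zeroed
    return np.real(np.fft.ifft(F))
```

The defining property H(cos) = sin provides a quick sanity check of the sign convention.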


Subjects
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Algorithms , Head/diagnostic imaging , Models, Biological , Phantoms, Imaging , Radiography, Thoracic , Reproducibility of Results
16.
Phys Med Biol ; 64(10): 105008, 2019 May 10.
Article in English | MEDLINE | ID: mdl-30965298

ABSTRACT

PURPOSE: To derive comprehensive equations for the frequency-dependent MTF and DQE of photon-counting detectors, including the effect that the combination of crosstalk with an energy threshold changes the pixel sensitivity profile, and to compare the results with measurements. METHODS: The framework of probability-generating functions (PGF) is used to find a simple method to derive the MTF and the DQE directly from a Monte Carlo model of the detection process. RESULTS: In combination with realistic detector model parameters, the method is used to predict the MTF and the DQE for different pixel sizes and thresholds. Particularly for small pixels, the modification of the sensitivity profile due to crosstalk substantially affects the frequency dependence of both quantities. CONCLUSION: The pixel sensitivity profile effect, i.e., the fact that the choice of threshold affects the detector sharpness, may play a substantial role in exploiting the full potential of photon-counting detectors. The model compares well with measurements: with only two model parameters, it can predict the MTF(f) and the DQE(f) for a wide range of thresholds.
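The link between the pixel sensitivity profile and the detector MTF can be illustrated without the full PGF machinery: treating the MTF as the normalized Fourier magnitude of the sensitivity profile (a textbook simplification, not the paper's model), a narrower effective profile, as produced by a higher threshold, yields a higher MTF at mid frequencies:

```python
import numpy as np


def mtf_from_sensitivity_profile(profile: np.ndarray, dx: float):
    """Return (MTF, frequencies) as the normalized magnitude of the
    Fourier transform of a sampled pixel sensitivity profile.
    'dx' is the sample spacing in mm; frequencies are in cycles/mm."""
    H = np.abs(np.fft.rfft(profile))
    return H / H[0], np.fft.rfftfreq(profile.size, d=dx)


if __name__ == "__main__":
    dx = 0.01  # 10 um sampling (illustrative)
    wide = np.zeros(256); wide[:50] = 1.0      # 0.5 mm rect aperture
    narrow = np.zeros(256); narrow[:30] = 1.0  # 0.3 mm effective aperture
    mtf_w, freqs = mtf_from_sensitivity_profile(wide, dx)
    mtf_n, _ = mtf_from_sensitivity_profile(narrow, dx)
    k = np.argmin(np.abs(freqs - 1.0))  # compare near 1 cycle/mm
    print(mtf_n[k] > mtf_w[k])  # True: narrower profile, sharper detector
```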


Subjects
Models, Theoretical , Monte Carlo Method , Photons , Radiometry/instrumentation , Humans , Signal-To-Noise Ratio
17.
Med Phys ; 46(11): 4777-4791, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31444974

ABSTRACT

INTRODUCTION: In cardiac computed tomography (CT), irregular motion may lead to unique artifacts for scanners whose longitudinal collimation does not cover the entire heart. Given partial coverage, subvolumes, or stacks, may be reconstructed and used to assemble a final CT volume. Irregular motion, for example due to cardiac arrhythmia or breathing, may cause mismatch between neighboring stacks and therefore discontinuities within the final CT volume. The aim of this work is the removal of these discontinuities, hereafter referred to as stack transition artifacts. METHODS AND MATERIALS: Stack transition artifact removal (STAR) is achieved using symmetric deformable image registration. A symmetric Demons algorithm was implemented and applied to the stacks to remove mismatch and therefore the stack transition artifacts. The registration is controlled by a single parameter that affects the smoothness of the deformation vector field (DVF); this smoothness is crucial for transforming the stacks realistically. Different smoothness settings, as well as an entirely automatic parameter selection that considers the required deformation magnitude for each registration, were tested with patient data. Thirteen datasets were evaluated, and simulations were performed on two additional datasets. RESULTS AND CONCLUSION: STAR considerably improved image quality while computing realistic DVFs. Discontinuities, for example those appearing as breaks or cuts in coronary arteries or cardiac valves, were removed or considerably reduced. A constant smoothing parameter that ensured satisfactory results for all datasets was found. The automatic parameter selection was able to find a proper setting for each individual dataset; consequently, no over-regularization of the DVF occurred that would have unnecessarily limited the registration accuracy for cases with small deformations. The automatic parameter selection yielded the best overall results and provides a registration method for cardiac data that does not require user input.
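A minimal 1D sketch of a symmetric-gradient Demons iteration with Gaussian DVF regularization is shown below (illustrative only; the function name, the 1D setting, and the single `sigma` smoothness parameter are stand-ins for the paper's 3D implementation):

```python
import numpy as np


def demons_step(fixed: np.ndarray, moving: np.ndarray,
                dvf: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """One symmetric Demons iteration in 1D. The displacement update uses
    the averaged gradient of the fixed and warped images; Gaussian
    smoothing of the DVF with 'sigma' plays the role of the smoothness
    parameter described in the abstract."""
    x = np.arange(fixed.size, dtype=float)
    warped = np.interp(x + dvf, x, moving)          # warp moving by current DVF
    diff = warped - fixed
    grad = 0.5 * (np.gradient(fixed) + np.gradient(warped))
    denom = grad ** 2 + diff ** 2 + 1e-12           # classic Demons normalization
    dvf = dvf + (-diff) * grad / denom
    # Gaussian regularization of the deformation vector field.
    radius = int(3 * sigma)
    kernel = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(dvf, kernel, mode="same")
```

Registering a Gaussian bump to a copy shifted by a few samples reduces the intensity mismatch over a few dozen iterations.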


Subjects
Artifacts , Heart/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed
18.
Med Phys ; 46(12): 5528-5537, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31348527

ABSTRACT

PURPOSE: It has been a long-standing wish in computed tomography (CT) to adapt the emitted x-ray beam intensity to the patient's changing attenuation during the rotation of a CT data acquisition. The patient attenuation changes both spatially, along the fan-beam angle, and temporally, between different projections. By modifying the pre-patient x-ray intensity profile according to the attenuation properties of the given object, image noise can be homogenized and dose can be delivered where it is really needed. Current state-of-the-art bowtie filters are not capable of changing their attenuation profiles during the CT data acquisition. In this work, we present the sheet-based dynamic beam attenuator (sbDBA), a novel technical concept enabling dynamic shaping of the transmission profile. METHODS: The sbDBA consists of an array of closely spaced, highly attenuating metal sheets focused toward the focal spot. Intensity modulation is achieved by controlled defocusing of the array, such that the attenuation of the x-ray fan beam depends on the fan angle. The sbDBA concept was evaluated in Monte Carlo (MC) simulations regarding its spectral and scattering properties. A prototype of the sbDBA was installed in a clinical CT scanner, and measurements evaluating the feasibility and performance of the concept were carried out. RESULTS: Experimental measurements on a CT scanner demonstrate the ability of the sbDBA to produce an attenuation profile that can be changed in width and location. Furthermore, the sbDBA shows constant transmission properties at various tube voltages. A small effect of the flying focal spot (FFS) position on the transmission profile can be observed. MC simulations confirm the essential properties of the sbDBA: in contrast to conventional bowtie filters, the sbDBA has almost no impact on the energy spectrum of the beam, and there is negligible scatter emission toward the patient. CONCLUSIONS: A new concept for dynamic beam attenuation has been presented, and its ability to dynamically shape the transmission profile has been successfully demonstrated. Advantages over regular bowtie filters, including the absence of filter-induced beam hardening and scatter, have been confirmed. The novel DBA concept paves the way toward region-of-interest (ROI) imaging and further reductions in patient dose.
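The noise-homogenization goal that motivates a dynamic attenuator can be written down directly: given object line integrals a(gamma) across the fan, a filter transmission t(gamma) = exp(a(gamma) - max a) makes the detected signal flat. This target-profile computation (a hypothetical helper for illustration, not part of the sbDBA control scheme described in the paper) is sketched below:

```python
import numpy as np


def ideal_transmission_profile(attenuation: np.ndarray) -> np.ndarray:
    """Pre-patient filter transmission that homogenizes the detected
    signal for a given object line-integral profile a(gamma):
    t(gamma) = exp(a(gamma) - max a), so t * exp(-a) is constant
    and no ray is attenuated more than necessary (max transmission = 1)."""
    a = np.asarray(attenuation, dtype=float)
    return np.exp(a - a.max())
```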


Subjects
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Abdomen/diagnostic imaging , Humans , Monte Carlo Method , Phantoms, Imaging , Scattering, Radiation , Software
19.
Med Phys ; 46(12): e835-e854, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811793

ABSTRACT

PURPOSE: Model-based iterative reconstruction is a promising approach to achieve dose reduction without affecting image quality in diagnostic x-ray computed tomography (CT). In the problem formulation, it is common to enforce non-negative values to accommodate the physical non-negativity of x-ray attenuation. Using this a priori information is believed to be beneficial in terms of image quality and convergence speed. However, enforcing non-negativity imposes limitations on the problem formulation and the choice of optimization algorithm. For these reasons, it is critical to understand the value of the non-negativity constraint. In this work, we present an investigation that sheds light on the impact of this constraint. METHODS: We primarily focus our investigation on the examination of properties of the converged solution. To avoid any possibly confounding bias, the reconstructions are all performed using a provably converging algorithm started from a zero volume. To keep the computational cost manageable, an axial CT scanning geometry with narrow collimation is employed. The investigation is divided into five experimental studies that challenge the non-negativity constraint in various ways, including noise, beam hardening, parametric choices, truncation, and photon starvation. These studies are complemented by a sixth one that examines the effect of using ordered subsets to obtain a satisfactory approximate result within 50 iterations. All studies are based on real data, which come from three phantom scans and one clinical patient scan. The reconstructions with and without the non-negativity constraint are compared in terms of image similarity and convergence speed. In select cases, the image similarity evaluation is augmented with quantitative image quality metrics such as the noise power spectrum and closeness to a known ground truth. 
RESULTS: For cases with moderate inconsistencies in the data, associated with noise and bone-induced beam hardening, our results show that the non-negativity constraint offers little benefit. By varying the regularization parameters in one of the studies, we observed that sufficient edge-preserving regularization tends to dilute the value of the constraint. For cases with strong data inconsistencies, the results are mixed: the constraint can be both beneficial and deleterious; in either case, however, the difference between using the constraint or not is small relative to the overall level of error in the image. The results with ordered subsets are encouraging in that they show similar observations. In terms of convergence speed, we only observed one major effect, in the study with data truncation; this effect favored the use of the constraint, but had no impact on our ability to obtain the converged solution without constraint. CONCLUSIONS: Our results did not highlight the non-negativity constraint as being strongly beneficial for diagnostic CT imaging. Altogether, we thus conclude that in some imaging scenarios, the non-negativity constraint could be disregarded to simplify the optimization problem or to adopt other forward projection models that require complex optimization machinery to be used together with non-negativity.
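A toy version of the constrained-versus-unconstrained question can be posed with projected gradient descent on a small least-squares problem (a minimal stand-in for the paper's model-based iterative reconstruction, likewise started from a zero volume; for consistent data both variants reach the same non-negative solution):

```python
import numpy as np


def reconstruct(A: np.ndarray, b: np.ndarray,
                n_iter: int = 500, nonneg: bool = True) -> np.ndarray:
    """Gradient descent on 0.5*||Ax - b||^2 with an optional
    non-negativity projection after each step, started from zero."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for this objective
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x - b))
        if nonneg:
            np.maximum(x, 0.0, out=x)       # project onto x >= 0
    return x
```

With inconsistent data (noise, beam hardening), the two variants can diverge at the x = 0 boundary, which is exactly the regime the study examines.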


Subjects
Image Processing, Computer-Assisted/methods , Models, Theoretical , Tomography, X-Ray Computed , Algorithms , Artifacts , Hip/diagnostic imaging , Humans , Metals , Phantoms, Imaging , Radiation Dosage
20.
Article in English | MEDLINE | ID: mdl-33304618

ABSTRACT

The aim of this study was to develop and validate a simulation platform that generates photon-counting CT images of voxelized phantoms with detailed modeling of manufacturer-specific components, including the geometry and physics of the x-ray source, source filtration, anti-scatter grids, and photon-counting detectors. The simulator generates projection images accounting for both primary and scattered photons, given a computational phantom, scanner configuration, and imaging settings. Beam hardening artifacts are corrected using a spectrum- and threshold-dependent water correction algorithm. Physical and computational versions of a clinical phantom (ACR) were used for validation. The physical phantom was imaged using a research prototype photon-counting CT system (Siemens Healthcare) in standard (macro) mode, at four dose levels and with two energy thresholds. The computational phantom was imaged with the developed simulator using the same parameters and settings as the actual acquisition. Images from both the real and simulated acquisitions were reconstructed using reconstruction software (FreeCT). Primary image quality metrics, including noise magnitude, noise ratio, noise correlation coefficients, noise power spectrum, CT number, in-plane modulation transfer function, and slice sensitivity profile, were extracted from both real and simulated data and compared. The simulator was further evaluated for imaging contrast materials (bismuth, iodine, and gadolinium) at three concentration levels and six energy thresholds. Qualitatively, the simulated images showed an appearance similar to the real ones. Quantitatively, the relative errors in the image quality measurements were all less than 4%. The developed simulator will enable systematic optimization and evaluation of the emerging photon-counting CT technology.
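The forward model at the core of any such simulator can be sketched in a few lines (a drastically simplified, monoenergetic Beer-Lambert model with Poisson counting noise; scatter, spectra, energy thresholds, and detector physics, all modeled by the platform above, are omitted, and the function names are hypothetical):

```python
import numpy as np


def simulate_counts(line_integrals: np.ndarray, n0: int = 10000,
                    seed: int = 0) -> np.ndarray:
    """Expected counts follow Beer-Lambert, N = N0 * exp(-line_integral),
    and each measurement is Poisson distributed around that mean."""
    rng = np.random.default_rng(seed)
    expected = n0 * np.exp(-np.asarray(line_integrals, dtype=float))
    return rng.poisson(expected)


def log_projection(counts: np.ndarray, n0: int = 10000) -> np.ndarray:
    """Recover line integrals from counts: the log step that a water-based
    beam hardening correction would follow in the real pipeline."""
    return -np.log(np.maximum(counts, 1) / n0)
```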
