Results 1 - 9 of 9
1.
Med Phys; 51(3): 1597-1616, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38227833

ABSTRACT

BACKGROUND: Multislice spiral computed tomography (MSCT) requires an interpolation between adjacent detector rows during backprojection. Not satisfying the Nyquist sampling condition along the z-axis results in aliasing effects, also known as windmill artifacts. These image distortions are characterized by bright streaks diverging from high contrast structures. PURPOSE: The z-flying focal spot (zFFS) is a well-established hardware-based solution that aims to double the sampling rate in the longitudinal direction and therefore reduce aliasing artifacts. However, given the technical complexity of the zFFS, this work proposes a deep learning-based approach as an alternative solution. METHODS: We propose a supervised learning approach to perform a mapping between input projections and the corresponding rows required for double sampling in the z-direction. We present a comprehensive evaluation using both a clinical dataset obtained using raw data from 40 real patient scans acquired with zFFS and a synthetic dataset consisting of 100 simulated spiral scans using a phantom specifically designed for our problem. For the clinical dataset, we utilized 32 scans as the training set and 8 scans as the validation set, whereas for the synthetic dataset, we used 80 scans for training and 20 scans for validation. Both qualitative and quantitative assessments are conducted on a test set consisting of nine real patient scans and six phantom measurements to validate the performance of our approach. A simulation study was performed to investigate the robustness against different scan configurations in terms of detector collimation and pitch value. RESULTS: In the quantitative comparison based on clinical patient scans from the test set, all network configurations show an improvement in the root mean square error (RMSE) of approximately 20% compared to neglecting the doubled longitudinal sampling of the zFFS.
The results of the qualitative analysis indicate that both clinical and synthetic training data can reduce windmill artifacts through the application of a correspondingly trained network. Together with the qualitative results from the test set phantom measurements, this emphasizes that training our method with synthetic data results in superior windmill artifact reduction. CONCLUSIONS: Deep learning-based raw data interpolation has the potential to enhance the sampling in the z-direction and thus minimize aliasing effects, as is the case with the zFFS. Training with synthetic data in particular showed promising results. While it may not outperform the zFFS, our method represents a beneficial solution for CT scanners lacking the necessary hardware components for zFFS.
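The row-doubling described above can be illustrated with a minimal numpy sketch (an illustrative baseline, not the authors' network): without zFFS, intermediate detector rows must be estimated, here by linear interpolation between adjacent rows; the proposed network learns a better version of exactly this row mapping. The array shape and function name are assumptions.

```python
import numpy as np

def double_z_sampling_linear(projections: np.ndarray) -> np.ndarray:
    """Naive baseline: double the row count along z by inserting the
    average of each pair of adjacent detector rows.

    projections: array of shape (views, rows, channels).
    Returns an array of shape (views, 2*rows - 1, channels)."""
    interleaved = 0.5 * (projections[:, :-1, :] + projections[:, 1:, :])
    out = np.empty((projections.shape[0],
                    2 * projections.shape[1] - 1,
                    projections.shape[2]), dtype=projections.dtype)
    out[:, 0::2, :] = projections   # original rows
    out[:, 1::2, :] = interleaved   # estimated intermediate rows
    return out

# Toy example: 1 view, 4 detector rows, 3 channels.
p = np.arange(12, dtype=float).reshape(1, 4, 3)
up = double_z_sampling_linear(p)
print(up.shape)  # (1, 7, 3)
```

A trained network replaces the fixed averaging with a learned mapping from the measured rows to the missing intermediate rows.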


Subjects
Artifacts; Deep Learning; Humans; Tomography, Spiral Computed/methods; Tomography Scanners, X-Ray Computed; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Algorithms
2.
Med Phys; 51(3): 1822-1831, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37650780

ABSTRACT

BACKGROUND: Due to technical constraints, dual-source dual-energy CT scans may lack spectral information in the periphery of the patient. PURPOSE: Here, we propose a deep learning-based iterative reconstruction to recover the missing spectral information outside the field of measurement (FOM) of the second source-detector pair. METHODS: In today's Siemens dual-source CT systems, one source-detector pair (referred to as A) typically has a FOM of about 50 cm, while the FOM of the other pair (referred to as B) is limited by technical constraints to a diameter of about 35 cm. As a result, dual-energy applications are currently only available within the small FOM, limiting their use for larger patients. To derive a reconstruction at B's energy for the entire patient cross-section, we propose a deep learning-based iterative reconstruction. Starting with A's reconstruction as the initial estimate, it employs a neural network in each iteration to refine the current estimate according to a raw data fidelity measure. The corresponding mapping is trained using simulated chest, abdomen, and pelvis scans based on a data set containing 70 full body CT scans. Finally, the proposed approach is tested on simulated and measured dual-source dual-energy scans and compared against existing reference approaches. RESULTS: For all test cases, the proposed approach was able to provide artifact-free CT reconstructions of B for the entire patient cross-section. Considering simulated data, the remaining error of the reconstructions is between 10 and 17 HU on average, about half that of the reference approaches. A similar performance with an average error of 8 HU could be achieved for real phantom measurements. CONCLUSIONS: The proposed approach is able to recover missing dual-energy information for patients exceeding the small 35 cm FOM of dual-source CT systems. It therefore potentially allows dual-energy applications to be extended to the entire patient cross-section.
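The iteration scheme described (network refinement alternated with a raw-data fidelity step) can be sketched on a toy 1D problem. Everything here is a stand-in assumption: a random matrix plays the role of the CT forward projector and a simple smoothing filter plays the role of the trained refinement network.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))        # stand-in for the forward projector of pair B
x_true = np.linspace(0.0, 1.0, 20)   # unknown patient cross-section (toy)
y = A @ x_true                       # "raw data" measured by pair B

def refine(x):
    """Stand-in for the learned refinement network (mild smoothing)."""
    xp = np.pad(x, 1, mode="edge")
    return np.convolve(xp, [0.25, 0.5, 0.25], mode="valid")

x = np.zeros_like(x_true)                  # crude initial estimate (pair A's role)
lr = 1.0 / np.linalg.norm(A, 2) ** 2       # safe gradient step size
for _ in range(200):
    x = refine(x)                          # network-style refinement
    x = x - lr * A.T @ (A @ x - y)         # raw-data fidelity step

residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
print(residual < 0.5)                      # the iteration fits the raw data
```

The fidelity step pulls the refined estimate back toward consistency with the measured data, which is what keeps the learned prior from hallucinating structure.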


Subjects
Deep Learning; Humans; Tomography, X-Ray Computed; Thorax; Phantoms, Imaging; Algorithms; Image Processing, Computer-Assisted
3.
Biomed Phys Eng Express; 8(2), 2022 Feb 18.
Article in English | MEDLINE | ID: mdl-34983885

ABSTRACT

The problem of data truncation in computed tomography (CT) is caused by the missing data when the patient exceeds the scan field of view (SFOV) of a CT scanner. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Our aim is thereby to generate a good estimate of the real patient data, not to provide a perfect, diagnostic image even in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs), in particular a general CNN called ConvNet, an autoencoder, and the U-Net architecture, have been investigated in this paper. Additionally, the impact of L1, L2, structural dissimilarity, and perceptual loss functions on the neural network's learning has been assessed and evaluated. The evaluation of a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV of the CT scanner. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity evaluation measures on the test set compared to the other applied networks. Therefore, it is possible to achieve truncation artifact removal using deep learning techniques.
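The structural dissimilarity loss that performed best can be sketched as follows. This uses a simplified global (single-window) SSIM with the common default constants; the paper presumably uses a windowed variant, so treat this as an illustration of the loss family, not the exact training loss.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two images."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def dssim(x, y):
    """Structural dissimilarity loss: (near-)zero for identical images."""
    return (1.0 - ssim_global(x, y)) / 2.0

img = np.random.default_rng(1).random((32, 32))
print(dssim(img, img) < 1e-9)      # True: identical images give ~0 loss
print(dssim(img, 1.0 - img) > 0.1) # True: inverted contrast is penalized
```

Unlike a plain L2 loss, DSSIM penalizes differences in local structure and contrast rather than raw pixel differences, which matches the image-restoration goal stated in the abstract.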


Subjects
Deep Learning; Artifacts; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods
4.
Med Phys; 49(3): 1495-1506, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34822186

ABSTRACT

PURPOSE: A motion compensation method aimed at correcting motion artifacts of cardiac valves is proposed. The primary focus is the aortic valve. METHODS: The method is based on partial angle reconstructions and a cost function including the image entropy. A motion model is applied to approximate the cardiac motion in the temporal and spatial domain. Based on characteristic values for velocities and strain during cardiac motion, penalties for the velocity and spatial derivatives are introduced to maintain anatomically realistic motion vector fields and avoid distortions. The model addresses global elastic deformation, but not the finer and more complicated motion of the valve leaflets. RESULTS: The method is verified based on clinical data. Image quality was improved for most artifact-impaired reconstructions. An image quality study with Likert scoring of the motion artifact severity on a scale from 1 (highest image quality) to 5 (lowest image quality/extreme artifact presence) was performed. The biggest improvements after applying motion compensation were achieved for strongly artifact-impaired initial images scoring 4 and 5, resulting in an average change of the scores by −0.59 ± 0.06 and −1.33 ± 0.03, respectively. For artifact-free images, a chance of introducing blurring was observed, and their average score rose by 0.42 ± 0.03. CONCLUSION: Motion artifacts were consistently removed and image quality improved.
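The entropy term of the cost function can be sketched as the Shannon entropy of the grey-value histogram (a common choice for motion-artifact metrics; the authors' exact formulation may differ). The intuition: streaks and doubled edges spread grey values over more histogram bins and thus raise the entropy, so minimizing it favors artifact-free images.

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy (bits) of the grey-value histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
sharp = np.zeros((64, 64))
sharp[16:48, 16:48] = 1.0                               # clean two-level image
corrupted = sharp + 0.2 * rng.normal(size=sharp.shape)  # artifact-like spread
print(image_entropy(sharp) < image_entropy(corrupted))  # True
```

A motion-compensation optimizer would search over motion model parameters for the reconstruction with the lowest entropy, subject to the velocity and strain penalties described above.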


Subjects
Aortic Valve; Image Processing, Computer-Assisted; Algorithms; Aortic Valve/diagnostic imaging; Artifacts; Image Processing, Computer-Assisted/methods; Motion; Tomography, X-Ray Computed
5.
Med Phys; 48(9): 4824-4842, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34309837

ABSTRACT

PURPOSE: Dual-source computed tomography (DSCT) uses two source-detector pairs offset by about 90°. In addition to the well-known forward scatter, a special issue in DSCT is cross-scattered radiation from X-ray tube A detected in the detector of system B and vice versa. This effect can lead to artifacts and a reduction of the contrast-to-noise ratio of the images. The purpose of this work is to present and evaluate different deep learning-based methods for scatter correction in DSCT. METHODS: We present different neural network-based methods for forward and cross-scatter correction in DSCT. These deep scatter estimation (DSE) methods mainly differ in the input and output information that is provided for training and inference and in whether they operate on two-dimensional (2D) or on three-dimensional (3D) data. The networks are trained and validated with scatter distributions obtained by our in-house Monte Carlo simulation. The simulated geometry is adapted to a realistic clinical setup. RESULTS: All DSE approaches reduce scatter-induced artifacts and lead to results superior to the measurement-based scatter correction. Forward scatter, under the presence of cross-scatter, is best estimated either by our network that uses the current projection and a couple of neighboring views (fDSE 2D few views) or by our 3D network that processes all projections simultaneously (fDSE 3D). Cross-scatter, under the presence of forward scatter, is best estimated using xSSE XDSE 2D, with xSSE referring to a quick single scatter estimate of cross-scatter, or by xDSE 3D, which uses all projections simultaneously. By using our proposed networks, the total scatter error in dual-source CT could be reduced from about 18 HU to approximately 3 HU. CONCLUSIONS: Deep learning-based scatter correction can reduce scatter artifacts in DSCT. To achieve more accurate cross-scatter estimations, the use of a cross-scatter approximation improves the results.
Also, the ability to leverage information across different projection angles improves the precision of the algorithm.
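The additive nature of the correction can be illustrated with a classic kernel-based scatter model, which is exactly the kind of estimator that DSE replaces with a trained network. The toy projection, the smooth scatter bump, and the box-blur kernel are all illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur standing in for a low-pass 'scatter kernel'."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(np.convolve, 1, img, kern, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, kern, mode="same")

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
# Smooth, low-frequency scatter background on top of a high-frequency primary.
scatter = 0.5 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 20.0 ** 2))
primary = 0.1 * rng.standard_normal((64, 64))
measured = primary + scatter

scatter_est = box_blur(measured)    # kernel model; DSE trains a CNN instead
corrected = measured - scatter_est  # additive scatter correction

err_before = np.abs(measured - primary).mean()
err_after = np.abs(corrected - primary).mean()
print(err_after < err_before)       # True: the correction reduces the error
```

The blur exploits the fact that scatter is low-frequency; the DSE networks learn a far more accurate, object-dependent estimate, including the cross-scatter contribution between the two systems.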


Subjects
Deep Learning; Algorithms; Artifacts; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Phantoms, Imaging; Scattering, Radiation; Tomography, X-Ray Computed
6.
Med Phys; 48(7): 3559-3571, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33959983

ABSTRACT

PURPOSE: During a typical cardiac short scan, the heart can move several millimeters. As a result, the corresponding CT reconstructions may be corrupted by motion artifacts. Especially the assessment of small structures, such as the coronary arteries, is potentially impaired by the presence of these artifacts. In order to estimate and compensate for coronary artery motion, this manuscript proposes the deep partial angle-based motion compensation (Deep PAMoCo). METHODS: The basic principle of the Deep PAMoCo relies on the concept of partial angle reconstructions (PARs), that is, it divides the short scan data into several consecutive angular segments and reconstructs them separately. Subsequently, the PARs are deformed according to a motion vector field (MVF) such that they represent the same motion state, and then summed up to obtain the final motion-compensated reconstruction. However, in contrast to prior work based on the same principle, the Deep PAMoCo estimates and applies the MVF via a deep neural network to increase the computational performance as well as the quality of the motion-compensated reconstructions. RESULTS: Using simulated data, it could be demonstrated that the Deep PAMoCo is able to remove almost all motion artifacts independent of the contrast, the radius, and the motion amplitude of the coronary artery. In all cases, the average error of the CT values along the coronary artery is about 25 HU, while errors of up to 300 HU can be observed if no correction is applied. Similar results were obtained for clinical cardiac CT scans, where the Deep PAMoCo clearly outperforms state-of-the-art coronary artery motion compensation approaches in terms of processing time as well as accuracy. CONCLUSIONS: The Deep PAMoCo provides an efficient approach to increase the diagnostic value of cardiac CT scans even if they are highly corrupted by motion.
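The PAR principle can be sketched in 1D, with circular shifts standing in both for the motion and for the MVF warping; in the Deep PAMoCo the MVF is estimated by a network rather than known, so this is only a toy illustration of why warping before summation removes the artifact.

```python
import numpy as np

# Toy 1D illustration: the object (a bright "vessel") shifts during the
# scan, and each partial angle reconstruction (PAR) sees one motion state.
n = 64
obj = np.zeros(n)
obj[30:34] = 1.0
shifts = [-4, -2, 0, 2, 4]                   # motion state per angular segment
pars = [np.roll(obj, s) for s in shifts]     # one PAR per segment

naive = np.mean(pars, axis=0)                # summing without compensation smears
compensated = np.mean([np.roll(p, -s) for p, s in zip(pars, shifts)], axis=0)

print(np.allclose(compensated, obj))         # True: motion fully removed
print(naive.max() < compensated.max())       # True: naive sum is blurred
```

With a known (here, exact) MVF the warped PARs align perfectly; the practical difficulty, which the network solves, is estimating that MVF from the artifact-impaired data itself.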


Subjects
Coronary Vessels; Deep Learning; Algorithms; Artifacts; Coronary Vessels/diagnostic imaging; Image Processing, Computer-Assisted; Motion; Phantoms, Imaging; Tomography, X-Ray Computed
7.
Med Phys; 48(7): 3583-3594, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33978240

ABSTRACT

PURPOSE: Modern computed tomography (CT) scanners have an extended field-of-view (eFoV) for reconstructing images up to the bore size, which is relevant for patients with higher BMI or non-isocentric positioning due to fixation devices. However, the accuracy of the image reconstruction in the eFoV is not well known since truncated data are used. This study introduces a new deep learning-based algorithm for extended field-of-view reconstruction and evaluates the accuracy of the eFoV reconstruction focusing on aspects relevant for radiotherapy. METHODS: A life-size three-dimensional (3D) printed thorax phantom, based on a patient CT for which the eFoV was necessary, was manufactured and used as reference. The phantom has holes allowing the placement of tissue-mimicking inserts used to evaluate the Hounsfield unit (HU) accuracy. CT images of the phantom were acquired using different configurations aiming to evaluate geometric and HU accuracy in the eFoV. Image reconstruction was performed using a commercially available state-of-the-art reconstruction algorithm (HDFoV) and the novel deep learning-based approach (HDeepFoV). Five patient cases were selected to evaluate the performance of both algorithms on patient data. Since there is no ground truth for patients, the reconstructions were qualitatively evaluated by five physicians and five medical physicists. RESULTS: The phantom geometry reconstructed with HDFoV showed boundary deviations from 1.0 to 2.5 cm depending on the volume of the phantom outside the regular scan field of view. HDeepFoV showed a superior performance regardless of the volume of the phantom within the eFoV, with a maximum boundary deviation below 1.0 cm. The maximum absolute HU difference for soft tissue inserts is below 79 and 41 HU for HDFoV and HDeepFoV, respectively. HDeepFoV has a maximum deviation of -18 HU for an inhaled lung insert, while HDFoV reached a 229 HU difference.
The qualitative evaluation of patient cases shows that the novel deep learning approach produces images that look more realistic and have fewer artifacts. CONCLUSION: To reconstruct images outside the regular scan field of view of the CT scanner, there is no alternative but to use some form of extrapolated data. In our study, we proposed and investigated a new deep learning-based algorithm and compared it to a commercial solution for eFoV reconstruction. The deep learning-based algorithm showed superior performance in quantitative evaluations based on phantom data and in qualitative assessments of patient data.


Subjects
Algorithms; Tomography, X-Ray Computed; Artifacts; Humans; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography Scanners, X-Ray Computed
8.
Med Phys; 46(11): 4777-4791, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31444974

ABSTRACT

INTRODUCTION: In cardiac computed tomography (CT), irregular motion may lead to unique artifacts for scanners with a longitudinal collimation that does not cover the entire heart. Given partial coverage, subvolumes, or stacks, may be reconstructed and used to assemble a final CT volume. Irregular motion, for example, due to cardiac arrhythmia or breathing, may cause mismatch between neighboring stacks and therefore discontinuities within the final CT volume. The aim of this work is the removal of these discontinuities, hereafter referred to as stack transition artifacts. METHOD AND MATERIALS: Stack transition artifact removal (STAR) is achieved using a symmetric deformable image registration. A symmetric Demons algorithm was implemented and applied to stacks to remove mismatch and therefore the stack transition artifacts. The registration can be controlled with one parameter that affects the smoothness of the deformation vector field (DVF). The latter is crucial for realistically transforming the stacks. Different smoothness settings, as well as an entirely automatic parameter selection that considers the required deformation magnitude for each registration, were tested with patient data. Thirteen datasets were evaluated. Simulations were performed on two additional datasets. RESULTS AND CONCLUSION: STAR considerably improved image quality while computing realistic DVFs. Discontinuities, for example, appearing as breaks or cuts in coronary arteries or cardiac valves, were removed or considerably reduced. A constant smoothing parameter that ensured satisfactory results for all datasets was found. The automatic parameter selection was able to find a proper setting for each individual dataset. Consequently, no over-regularization of the DVF occurred that would have unnecessarily limited the registration accuracy for cases with small deformations.
The automatic parameter selection yielded the best overall results and provided a registration method for cardiac data that does not require user input.
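The Demons update with DVF smoothing can be sketched in 1D under simplifying assumptions: Gaussian test profiles, a single resolution level, a symmetric force built from both image gradients, and `sigma` playing the role of the smoothness parameter mentioned above. This is a toy illustration, not the STAR implementation.

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma) + 1
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def demons_1d(fixed, moving, iters=300, sigma=1.5):
    """Toy symmetric Demons: the force uses both image gradients, and the
    deformation vector field (DVF) is Gaussian-smoothed each iteration."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    grad_f = np.gradient(fixed)
    kern = gaussian_kernel(sigma)
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)
        diff = warped - fixed
        g = 0.5 * (grad_f + np.gradient(warped))       # symmetric force term
        denom = g ** 2 + diff ** 2
        force = np.where(denom > 1e-9, -diff * g / denom, 0.0)
        u = np.convolve(u + force, kern, mode="same")  # DVF regularization
    return np.interp(x + u, x, moving)

x = np.arange(128, dtype=float)
fixed = np.exp(-(x - 60) ** 2 / 32.0)    # structure at position 60
moving = np.exp(-(x - 66) ** 2 / 32.0)   # same structure, shifted by 6
registered = demons_1d(fixed, moving)
print(np.abs(registered - fixed).mean() < 0.5 * np.abs(moving - fixed).mean())
```

A larger `sigma` yields a smoother, more rigid DVF; too large a value would over-regularize and leave residual mismatch, which is the trade-off the automatic parameter selection addresses.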


Subjects
Artifacts; Heart/diagnostic imaging; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed
9.
Invest Radiol; 53(7): 432-439, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29543692

ABSTRACT

OBJECTIVES: A novel imaging technique ("X-map") has been developed to identify acute ischemic lesions in stroke patients using non-contrast-enhanced dual-energy computed tomography (NE-DE-CT). Using the 3-material decomposition technique, the original X-map ("X-map 1.0") eliminates fat and bone from the images, suppresses the gray matter (GM)-white matter (WM) tissue contrast, and makes signals of edema induced by severe ischemia easier to detect. The aim of this study was to address the following 2 problems with the X-map 1.0: (1) biases in CT numbers (or artifacts) near the skull of NE-DE-CT images and (2) large intrapatient and interpatient variations in X-map 1.0 values. MATERIALS AND METHODS: We improved both an iterative beam-hardening correction (iBHC) method and the X-map algorithm. The new iBHC (iBHC2) modeled x-ray physics more accurately. The new X-map ("X-map 2.0") estimated regional GM values, thus maximizing the ability to suppress the GM-WM contrast, to make edema signals quantitative, and to enhance the edema signals that denote an increased water density for each pixel. We performed a retrospective study of 11 patients (3 men, 8 women; mean age, 76.3 years; range, 68-90 years) who presented to the emergency department with symptoms of acute stroke. Images were reconstructed with the old iBHC (iBHC1) and the new iBHC2, and biases in CT numbers near the skull were measured. Both X-map 2.0 and X-map 1.0 maps were computed from iBHC2 images, both with and without a material decomposition-based edema signal enhancement (ESE) process. X-map values were measured at 5 to 9 locations on GM without infarct per patient; the mean value was calculated for each patient (the patient-mean X-map value) and subtracted from the measured X-map values to generate zero-mean X-map values.
The standard deviation of the patient-mean X-map values over multiple patients denotes the interpatient variation; the standard deviation over multiple zero-mean X-map values denotes the intrapatient variation. The Levene F test was performed to assess the difference in the standard deviations with different algorithms. Using data from 5 patients who had diffusion-weighted imaging (DWI) within 2 hours of NE-DE-CT, mean values at and near ischemic lesions were measured at 7 to 14 locations per patient with X-map images, CT images (low kV and high kV), and DWI images. The Pearson correlation coefficient was calculated between a normalized increase in DWI signals and either X-map or CT. RESULTS: The bias in CT numbers was lower with iBHC2 than with iBHC1 in both high- and low-kV images (2.5 ± 2.0 HU [95% confidence interval (CI), 1.3-3.8 HU] for iBHC2 vs 6.9 ± 2.3 HU [95% CI, 5.4-8.3 HU] for iBHC1 with high-kV images, P < 0.01; 1.5 ± 3.6 HU [95% CI, -0.8 to 3.7 HU] vs 12.8 ± 3.3 HU [95% CI, 10.7-14.8 HU] with low-kV images, P < 0.01). The interpatient variation was smaller with X-map 2.0 than with X-map 1.0, both with and without ESE (4.3 [95% CI, 3.0-7.6] for X-map 2.0 vs 19.0 [95% CI, 13.3-22.4] for X-map 1.0, both with ESE, P < 0.01; 3.0 [95% CI, 2.1-5.3] vs 12.0 [95% CI, 8.4-21.0] without ESE, P < 0.01). The intrapatient variation was also smaller with X-map 2.0 than with X-map 1.0 (6.2 [95% CI, 5.3-7.3] vs 8.5 [95% CI, 7.3-10.1] with ESE, P = 0.0122; 4.1 [95% CI, 3.6-4.9] vs 6.3 [95% CI, 5.5-7.6] without ESE, P < 0.01). The best 3 correlation coefficients (R) with DWI signals were -0.733 (95% CI, -0.845 to -0.560, P < 0.001) for X-map 2.0 with ESE, -0.642 (95% CI, -0.787 to -0.429, P < 0.001) for high-kV CT, and -0.609 (95% CI, -0.766 to -0.384, P < 0.001) for X-map 1.0 with ESE. CONCLUSION: Both problems outlined in the objectives were addressed by improving the iBHC and X-map algorithms.
The iBHC2 reduced the bias in CT numbers and improved the visibility of GM-WM contrast throughout the brain. The combination of iBHC2 and X-map 2.0 with ESE decreased both intrapatient and interpatient variations of edema signals significantly and showed a strong correlation with DWI signals in terms of the strength of edema signals.
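The variation statistics described above can be reproduced with a short sketch. The numbers are made up for illustration; per the text, subtracting each patient's mean gives zero-mean X-map values, whose spread is the intrapatient variation, while the spread of the patient means is the interpatient variation.

```python
import numpy as np

# Hypothetical X-map measurements at GM locations for 3 patients.
patients = [
    np.array([101.0, 104.0, 99.0, 102.0]),
    np.array([88.0, 91.0, 90.0, 87.0]),
    np.array([110.0, 108.0, 113.0, 111.0]),
]

patient_means = np.array([p.mean() for p in patients])
zero_mean = np.concatenate([p - p.mean() for p in patients])

interpatient = patient_means.std(ddof=1)  # spread of patient-mean values
intrapatient = zero_mean.std(ddof=1)      # spread within patients
print(round(float(interpatient), 2), round(float(intrapatient), 2))  # 10.8 1.81
```

In this toy data the interpatient variation dominates, which mirrors the X-map 1.0 problem the regional GM estimation of X-map 2.0 was designed to reduce.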


Subjects
Brain Ischemia/diagnostic imaging; Edema/diagnostic imaging; Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Stroke/diagnostic imaging; Tomography, X-Ray Computed/methods; Aged; Aged, 80 and over; Algorithms; Brain/diagnostic imaging; Brain/physiopathology; Brain Ischemia/complications; Brain Ischemia/physiopathology; Edema/complications; Edema/physiopathology; Female; Humans; Male; Reproducibility of Results; Retrospective Studies; Stroke/complications; Stroke/physiopathology