Results 1 - 7 of 7
1.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471171

ABSTRACT

Objective. The aim of this study was to reconstruct volumetric computed tomography (CT) images in real time from ultra-sparse two-dimensional x-ray projections, facilitating easier navigation and positioning during image-guided radiation therapy. Approach. Our approach leverages a voxel-space-searching Transformer model to overcome the limitations of conventional CT reconstruction techniques, which require extensive x-ray projections and lead to high radiation doses and equipment constraints. Main results. The proposed XTransCT algorithm demonstrated superior performance in terms of image quality, structural accuracy, and generalizability across different datasets, including a hospital set of 50 patients, the large-scale public LIDC-IDRI dataset, and the LNDb dataset for cross-validation. Notably, the algorithm achieved an approximately 300% improvement in reconstruction speed, at 44 ms per 3D image reconstruction, compared with previous 3D convolution-based methods. Significance. The XTransCT architecture has the potential to impact clinical practice by providing high-quality CT images faster and with substantially reduced radiation exposure for patients. The model's generalizability suggests it may be applicable in various healthcare settings.
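To make the input/output geometry concrete, here is a minimal, non-learned sketch of the "ultra-sparse" setting: a 3D volume reduced to two orthogonal parallel-beam projections (simple line integrals), and a naive outer-product volumetric prior rebuilt from them. This is an illustration of the problem setup only; XTransCT itself uses a learned voxel-space-searching Transformer, not this closed-form prior.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))          # toy CT volume, axes (z, y, x)

# Two orthogonal parallel-beam projections: sums along one axis each.
proj_frontal = volume.sum(axis=1)          # (z, x) view
proj_lateral = volume.sum(axis=2)          # (z, y) view

# Naive volumetric prior: per-slice outer product of the two projections,
# normalized by the total per-slice mass so both projection constraints hold.
prior = proj_lateral[:, :, None] * proj_frontal[:, None, :]   # (z, y, x)
prior /= volume.sum(axis=(1, 2))[:, None, None]
```

A useful property of this normalization: re-projecting the prior along either axis reproduces the two measured projections exactly, even though the prior is far from the true volume.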


Subjects
Radiotherapy, Image-Guided; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; X-Rays; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional; Algorithms; Image Processing, Computer-Assisted/methods; Phantoms, Imaging
2.
Bioengineering (Basel) ; 10(11)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38002438

ABSTRACT

The detection of Coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational and multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score were used. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated performance comparable to that achieved with conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for diagnosis of various other diseases.
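For readers unfamiliar with the reported metrics, a short sketch of how accuracy, specificity, and F1 score follow from raw confusion-matrix counts (the tp/fp/tn/fn values below are made up for illustration and are not from the study):

```python
# Hypothetical confusion-matrix counts: true/false positives and negatives.
tp, fp, tn, fn = 930, 40, 320, 10

accuracy    = (tp + tn) / (tp + fp + tn + fn)   # fraction of correct calls
specificity = tn / (tn + fp)                    # true-negative rate
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)                    # sensitivity
f1_score    = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} specificity={specificity:.3f} f1={f1_score:.3f}")
```

Note that F1 ignores true negatives entirely, which is why a model can report a higher F1 than specificity, as in the abstract above.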

3.
Phys Med Biol ; 68(20)2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37714184

ABSTRACT

Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors. These artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style translation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. Our method involves simulating ring artifacts on real CT data to generate uncorrected CT (uCT) data and transforming the images to polar coordinates, where the ring artifacts become strip artifacts. Subsequently, the DCLGAN synthesis network is applied in the polar coordinate system to remove the strip artifacts and generate a synthetic CT (sCT). We compare the uCT and sCT images to obtain a residual image, which is then filtered to extract the strip artifacts. An inverse polar transformation is performed to obtain the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. To validate the effectiveness of our approach, we tested it using real CT data, simulated data, and cone-beam computed tomography images of patients' brains. The corrected CT images showed a reduction in mean absolute error of 12.36 Hounsfield units (HU), a decrease in root mean square error of 18.94 HU, an increase in peak signal-to-noise ratio of 3.53 decibels (dB), and an improvement in structural similarity index of 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts and preserving image details, making it a valuable tool for CT imaging.
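The geometric idea behind the approach can be shown without the learned network: a ring artifact depends only on radius, so in polar coordinates it becomes a stripe. The toy sketch below stands in angular averaging and a moving-average low-pass filter for the DCLGAN stage; it is an illustration of the ring-to-strip trick, not the paper's pipeline.

```python
import numpy as np

n = 128
yy, xx = np.mgrid[:n, :n] - n / 2
radius = np.hypot(yy, xx).astype(int)               # integer radius per pixel

clean = np.exp(-radius / 40.0)                      # smooth toy "anatomy"
image = clean + 0.3 * ((radius == 21) | (radius == 22))   # add a narrow ring

# Angular averaging: mean intensity per radius bin (the "polar" profile).
max_r = radius.max() + 1
counts = np.bincount(radius.ravel(), minlength=max_r)
profile = np.bincount(radius.ravel(), weights=image.ravel(),
                      minlength=max_r) / np.maximum(counts, 1)

# The ring is the high-frequency part of the radial profile; a moving
# average plays the role of the low-pass filter that separates it.
smooth = np.convolve(profile, np.ones(15) / 15, mode="same")
artifact = profile - smooth
corrected = image - artifact[radius]                # inverse polar mapping
```

Because the smooth anatomy survives the filter while the narrow ring does not, subtracting `artifact[radius]` suppresses the ring while leaving the underlying image largely intact.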

4.
Comput Biol Med ; 165: 107377, 2023 10.
Article in English | MEDLINE | ID: mdl-37651766

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is widely utilized in modern radiotherapy; however, CBCT images exhibit increased scatter artifacts compared to planning CT (pCT), compromising image quality and limiting further applications. Scatter correction is thus crucial for improving CBCT image quality. METHODS: In this study, we proposed an unsupervised contrastive learning method for CBCT scatter correction. Initially, we transformed low-quality CBCT into high-quality synthetic pCT (spCT) and generated forward projections of CBCT and spCT. By computing the difference between these projections, we obtained a residual image containing image details and scatter artifacts. Image details primarily comprise high-frequency signals, while scatter artifacts consist mainly of low-frequency signals. We extracted the scatter projection signal by applying a low-pass filter to remove image details. The corrected CBCT (cCBCT) projection signal was obtained by subtracting the scatter artifacts projection signal from the original CBCT projection. Finally, we employed the FDK reconstruction algorithm to generate the cCBCT image. RESULTS: To evaluate cCBCT image quality, we aligned the CBCT and pCT of six patients. In comparison to CBCT, cCBCT maintains anatomical consistency and significantly enhances CT number, spatial homogeneity, and artifact suppression. The mean absolute error (MAE) of the test data decreased from 88.0623 ± 26.6700 HU to 17.5086 ± 3.1785 HU. The MAE of fat regions of interest (ROIs) declined from 370.2980 ± 64.9730 HU to 8.5149 ± 1.8265 HU, and the error between their maximum and minimum CT numbers decreased from 572.7528 HU to 132.4648 HU. The MAE of muscle ROIs reduced from 354.7689 ± 25.0139 HU to 16.4475 ± 3.6812 HU. We also compared our proposed method with several conventional unsupervised synthetic image generation techniques, demonstrating superior performance. 
CONCLUSIONS: Our approach effectively enhances CBCT image quality and shows promising potential for future clinical adoption.
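The frequency-separation step described above can be sketched in one dimension. Here the spCT synthesis network is omitted; synthetic signals stand in for the projections, and a moving average plays the role of the low-pass filter that keeps the low-frequency scatter while rejecting high-frequency detail.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512)
detail  = 0.05 * np.sin(2 * np.pi * 60 * x)          # high-frequency image detail
scatter = 0.40 * np.exp(-((x - 0.5) ** 2) / 0.02)    # low-frequency scatter

proj_spct = np.full_like(x, 1.0)        # stand-in for the synthetic-pCT projection
proj_cbct = 1.0 + detail + scatter      # raw CBCT projection with scatter

# Residual between projections mixes detail (high-freq) and scatter (low-freq).
residual = proj_cbct - proj_spct

# Low-pass filter: moving average wide enough to suppress the detail frequency.
k = 41
scatter_est = np.convolve(residual, np.ones(k) / k, mode="same")

proj_corrected = proj_cbct - scatter_est   # corrected CBCT projection
```

In the paper this correction happens per projection before FDK reconstruction of the cCBCT volume; the 1D sketch shows only why the filter separates the two components.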


Subjects
Algorithms; Cone-Beam Computed Tomography; Humans; Cone-Beam Computed Tomography/methods; Artifacts; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Scattering, Radiation
5.
Comput Biol Med ; 161: 106888, 2023 07.
Article in English | MEDLINE | ID: mdl-37244146

ABSTRACT

X-ray computed tomography (CT) plays a vitally important role in clinical diagnosis, but radiation exposure also carries a cancer risk for patients. Sparse-view CT reduces the radiation delivered to the patient by sparsely sampling projections. However, images reconstructed from sparse-view sinograms often suffer from serious streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, the sparse-view projections are reconstructed with the filtered back-projection algorithm. Next, the reconstructed results are fed into the deep network for artifact correction. More specifically, we integrate an attention-gating module into a U-Net pipeline; the module implicitly learns to emphasize features relevant to a given task while suppressing background regions. Attention is used to combine the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To improve performance, we fused a pre-trained ResNet50 model into our architecture. The model was trained and tested on a dataset from The Cancer Imaging Archive (TCIA), which consists of images of various human organs obtained from multiple views. These experiments demonstrate that the method is highly effective in removing streaking artifacts while preserving structural details. Additionally, quantitative evaluation shows significant improvement in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) compared to other methods, with an average PSNR of 33.9538, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified using the 2016 AAPM dataset.
Therefore, this approach holds great promise in achieving high-quality sparse-view CT images.
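The attention-gating idea, combining a local feature vector with a coarse-scale gating signal, can be sketched in a few lines. This is a 1D, untrained toy (random weights, made-up shapes), not the paper's U-Net module, but the arithmetic is the standard additive attention gate: alpha = sigmoid(psi · relu(Wx·x + Wg·g)).

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_gate(x, g, w_x, w_g, psi):
    """Scale skip-connection features x by a gate computed from x and the
    coarser gating signal g."""
    q = np.maximum(w_x @ x + w_g @ g, 0.0)        # ReLU of joint features
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))      # scalar gate in (0, 1)
    return alpha * x, alpha

x = rng.normal(size=16)          # local (skip-connection) feature vector
g = rng.normal(size=16)          # global gating signal from the coarse scale
w_x = rng.normal(size=(8, 16))   # untrained projection weights
w_g = rng.normal(size=(8, 16))
psi = rng.normal(size=8)

x_att, alpha = attention_gate(x, g, w_x, w_g, psi)
```

In a real network the gate is a per-pixel map rather than a scalar, and the weights are learned so that background regions receive gates near zero.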


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Artifacts
6.
Bioengineering (Basel) ; 10(2)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36829638

ABSTRACT

Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested it on lung data (with and without tumors) and phantom data. The results show that the Dice coefficient and normalized cross-correlation are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 s. In addition, the proposed model can track lung tumors, highlighting its clinical potential.
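A brief sketch of the setup and the normalized cross-correlation (NCC) metric reported above: two orthogonal-angle views of a volume, with simple parallel projections standing in for the X-ray images (the registration network itself is not reproduced here).

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images; 1.0 for identical images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(2)
volume = rng.random((32, 32, 32))      # toy 3D volume, axes (z, y, x)

# The two orthogonal-angle inputs the method relies on.
p0 = volume.sum(axis=1)    # 0-degree view
p90 = volume.sum(axis=2)   # 90-degree view

score = ncc(p0, p0)        # perfect alignment scores 1.0
```

During registration, NCC between projections of the deformed volume and the measured X-rays serves as the alignment quality signal; values above 0.92, as in the abstract, indicate close agreement.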

7.
Comput Biol Med ; 155: 106710, 2023 03.
Article in English | MEDLINE | ID: mdl-36842222

ABSTRACT

PURPOSE: Metal artifacts can significantly degrade the quality of computed tomography (CT) images. They arise when X-rays pass through implanted metal, which causes severe attenuation. This degradation in image quality can hinder subsequent clinical diagnosis and treatment planning. Beam-hardening artifacts often manifest as severe strip artifacts in the image domain, affecting the overall quality of the reconstructed CT image. In the sinogram domain, metal traces are confined to specific regions, so restricting processing to these regions preserves image information elsewhere and makes the model more robust. To address this issue, we propose a region-based correction of beam-hardening artifacts in the sinogram domain using deep learning. METHODS: We present a model composed of three modules: (a) a Sinogram Metal Segmentation Network (Seg-Net), (b) a Sinogram Enhancement Network (Sino-Net), and (c) a Fusion Module. The model starts by using an Attention U-Net to segment the metal regions in the sinogram. The segmented metal regions are then interpolated to obtain a metal-free sinogram. Sino-Net is then applied to compensate for the loss of tissue and artifact information in the metal regions. The corrected metal sinogram and the interpolated metal-free sinogram are used to reconstruct the metal CT and metal-free CT images, respectively. Finally, the Fusion Module combines the two CT images to produce the result. RESULTS: Our proposed method shows strong performance in both qualitative and quantitative evaluations. The peak signal-to-noise ratio (PSNR) of the CT image before and after correction was 18.22 and 30.32, respectively. The structural similarity index measure (SSIM) improved from 0.75 to 0.99, and the weighted peak signal-to-noise ratio (WPSNR) increased from 21.69 to 35.68.
CONCLUSIONS: Our proposed method demonstrates the reliability of high-accuracy correction of beam hardening artifacts.
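The interpolation step that produces the metal-free sinogram can be sketched directly (Seg-Net and Sino-Net are omitted; the mask here is given rather than predicted): within each projection row, samples flagged as metal are replaced by linear interpolation from the surrounding non-metal samples.

```python
import numpy as np

def inpaint_sinogram(sino, metal_mask):
    """Replace metal-trace samples in each detector row by linear interpolation
    from the non-metal samples of that row."""
    out = sino.copy()
    cols = np.arange(sino.shape[1])
    for i in range(sino.shape[0]):            # one detector row per angle
        m = metal_mask[i]
        if m.any():
            out[i, m] = np.interp(cols[m], cols[~m], sino[i, ~m])
    return out

# Toy sinogram whose rows are linear ramps, with a simulated metal trace.
sino = np.tile(np.linspace(0.0, 1.0, 10), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 4:7] = True

fixed = inpaint_sinogram(sino, mask)
```

Because the toy rows are linear ramps, interpolation restores them exactly; on real data the interpolated region loses tissue detail, which is why the paper adds Sino-Net to compensate.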


Subjects
Artifacts; Deep Learning; Reproducibility of Results; Tomography, X-Ray Computed/methods; Metals; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Algorithms