Results 1 - 2 of 2
1.
Med Phys ; 50(8): 5002-5019, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36734321

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) plays an increasingly important role in image-guided radiation therapy. However, CBCT image quality is severely degraded by excessive scatter contamination, especially in the abdominal region, hindering its further application in radiation therapy.

PURPOSE: To restore low-quality CBCT images contaminated by scatter signals, a scatter correction algorithm combining the advantages of convolutional neural networks (CNNs) and the Swin Transformer is proposed.

METHODS: In this paper, a scatter correction model for CBCT images, the Flip Swin Transformer U-shape network (FSTUNet), is proposed. The model exploits the strengths of CNNs in texture detail and of the Swin Transformer in global correlation to accurately extract shallow and deep features, respectively. Instead of using the original Swin Transformer tandem structure, we build the Flip Swin Transformer Block to achieve more powerful inter-window association extraction. The validity and clinical relevance of the method are demonstrated through extensive experiments on a Monte Carlo (MC) simulation dataset and on a frequency-split dataset generated by a validated method.

RESULTS: Experimental results on the MC-simulated dataset show that the root mean square error of images corrected by the method is reduced from over 100 HU to about 7 HU. Both the structural similarity index measure (SSIM) and the universal quality index (UQI) are close to 1. Experimental results on the frequency-split dataset demonstrate that the method not only corrects shading artifacts but also exhibits a high degree of structural consistency. In addition, comparison experiments show that FSTUNet outperforms the UNet, Deep Residual Convolutional Neural Network (DRCNN), DSENet, Pix2pixGAN, and 3DUnet methods in both qualitative and quantitative metrics.
CONCLUSIONS: Accurately capturing features at different levels greatly benefits the reconstruction of high-quality scatter-free images. The proposed FSTUNet method is an effective solution to CBCT scatter correction and has the potential to improve the accuracy of CBCT image-guided radiation therapy.
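The abstract above evaluates scatter correction with RMSE in Hounsfield units and the universal quality index (UQI). As a minimal illustration of how these two metrics are computed (a sketch following the standard definitions, not the authors' evaluation code), consider:

```python
import numpy as np

def rmse_hu(corrected, reference):
    """Root mean square error between two CT volumes, in Hounsfield units."""
    corrected = np.asarray(corrected, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    return float(np.sqrt(np.mean((corrected - reference) ** 2)))

def uqi(x, y):
    """Universal Quality Index (Wang & Bovik, 2002).

    Combines correlation, luminance, and contrast distortion;
    equals 1.0 only for identical (non-constant) images.
    Computed here globally; implementations often average over
    local sliding windows instead.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))
```

A correction that reduces RMSE from over 100 HU to about 7 HU while driving UQI toward 1, as reported above, indicates both low absolute intensity error and high structural fidelity.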


Subjects
Algorithms; Neural Networks, Computer; Scattering, Radiation; Phantoms, Imaging; Cone-Beam Computed Tomography/methods
2.
Comput Med Imaging Graph ; 103: 102150, 2023 01.
Article in English | MEDLINE | ID: mdl-36493595

ABSTRACT

Magnetic resonance (MR) image-guided radiation therapy, an active topic in current radiation therapy research, relies on MR to generate synthetic computed tomography (SCT) images for radiation therapy. Since the introduction of deep learning techniques, convolution-based generative adversarial networks (GANs) have achieved promising results in synthesizing CT from MR. However, due to the local limitations of pure convolutional neural network (CNN) structures and the local mismatch between paired MR and CT images, particularly in pelvic soft tissue, the performance of GANs in synthesizing CT from MR requires further improvement. In this paper, we propose a new GAN called the Residual Transformer Conditional GAN (RTCGAN), which exploits the advantages of CNNs in local texture detail and of the Transformer in global correlation to extract multi-level features from MR and CT images. Furthermore, a feature reconstruction loss is used to further constrain the latent image features, reducing over-smoothing and local distortion of the SCT. Experiments show that the RTCGAN output is visually closer to the reference CT (RCT) image and achieves desirable results on locally mismatched tissues. In the quantitative evaluation, the MAE, SSIM, and PSNR of RTCGAN are 45.05 HU, 0.9105, and 28.31 dB, respectively, all outperforming the comparison methods: deep convolutional neural networks (DCNN), Pix2Pix, Attention-UNet, WPD-DAGAN, and HDL.
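The quantitative comparison above rests on MAE (in HU) and PSNR (in dB) between the synthetic and reference CT. A minimal sketch of these metrics, following their standard definitions (the `data_range` value is an assumption; the appropriate CT intensity span varies between papers):

```python
import numpy as np

def mae_hu(sct, rct):
    """Mean absolute error between synthetic and reference CT, in HU."""
    sct = np.asarray(sct, dtype=np.float64)
    rct = np.asarray(rct, dtype=np.float64)
    return float(np.mean(np.abs(sct - rct)))

def psnr_db(sct, rct, data_range=4000.0):
    """Peak signal-to-noise ratio in dB.

    data_range is the assumed span of CT intensities (e.g. roughly
    -1000 HU for air to +3000 HU for dense bone); the choice affects
    the absolute PSNR value and must match the paper's convention.
    """
    sct = np.asarray(sct, dtype=np.float64)
    rct = np.asarray(rct, dtype=np.float64)
    mse = np.mean((sct - rct) ** 2)
    if mse == 0.0:
        return float("inf")
    return float(20.0 * np.log10(data_range) - 10.0 * np.log10(mse))
```

Lower MAE and higher PSNR both indicate a synthetic CT closer to the reference, which is how RTCGAN's 45.05 HU / 28.31 dB figures are to be read against the baselines.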


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed; Magnetic Resonance Spectroscopy