1.
Med Image Anal; 91: 102998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857066

ABSTRACT

Radiotherapy is a pivotal treatment modality for malignant tumors. However, its accuracy is significantly compromised by respiratory-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-anchored volumetric tumor tracking methodology that employs single-angle X-ray projection images. The method aligns intraoperative two-dimensional (2D) X-ray images with pre-treatment three-dimensional (3D) planning computed tomography (CT) scans, enabling extraction of the 3D tumor position and segmentation. Prior to therapy, a patient-specific tumor tracking model is built, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During treatment, real-time X-ray images are fed into the trained model, producing the corresponding 3D tumor position. Validation on actual patient lung data and lung phantoms attests to the high localization precision of the method at lowered radiation doses, marking a promising step toward more precise radiotherapy.
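As a rough illustration of the inference step described above (a single 2D projection in, a 3D tumor position out), the following is a minimal PyTorch sketch. The architecture, input size, and regression head are illustrative assumptions, not the authors' published hybrid data augmentation, style correction, and registration network.

```python
# Minimal sketch (illustrative, not the authors' model): a CNN that
# regresses a 3D tumor centroid (x, y, z) from a single-angle 2D
# X-ray projection.
import torch
import torch.nn as nn

class ProjectionToTumor3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),                                # global pooling
        )
        self.head = nn.Linear(64, 3)  # (x, y, z) tumor position

    def forward(self, xray):          # xray: (B, 1, H, W)
        feat = self.encoder(xray).flatten(1)
        return self.head(feat)

model = ProjectionToTumor3D()
pred = model(torch.randn(2, 1, 256, 256))  # -> (2, 3) predicted positions
```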


Subjects
Deep Learning; Neoplasms; Humans; Imaging, Three-Dimensional/methods; X-Rays; Tomography, X-Ray Computed/methods; Neoplasms/diagnostic imaging; Neoplasms/radiotherapy; Cone-Beam Computed Tomography/methods
2.
Bioengineering (Basel); 10(2), 2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36829638

ABSTRACT

Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications, but existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method achieves alignment quickly using only two orthogonal-angle projections. We tested it on lung data (with and without tumors) and phantom data. The results show that the Dice coefficient and normalized cross-correlation exceed 0.97 and 0.92, respectively, and that registration takes less than 1.2 seconds. In addition, the proposed model can track lung tumors, highlighting its clinical potential.
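The two reported metrics are standard; below is a minimal NumPy sketch of both, assuming binary masks for Dice and same-shape intensity volumes for normalized cross-correlation (NCC).

```python
# Sketch of the two reported metrics (standard definitions, not code
# from the paper).
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def ncc(x, y):
    """Normalized cross-correlation between two same-shape volumes."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

a = np.zeros((8, 8)); a[2:6, 2:6] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
print(dice(a, b))  # 2*9 / (16+16) = 0.5625
```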

3.
Phys Med Biol; 67(5), 2022 Mar 3.
Article in English | MEDLINE | ID: mdl-35172290

ABSTRACT

Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages for moving-target localization, tracking, and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation introduced by 4D CBCT reconstruction restrict its clinical application. We propose a novel unsupervised deep learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach. The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results show that streak and motion artifacts were significantly suppressed, and the spatial resolution of pulmonary vessels and microstructure was also improved. To illustrate the results from different directions, we provide an animation showing different views of the predicted corrected image in the supplementary material. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
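The loss described above resembles a patch-wise InfoNCE objective with negatives drawn from other patches of the same input. Below is a hedged PyTorch sketch under that assumption; the patch sampling, feature dimensions, and temperature are illustrative, not taken from the paper.

```python
# Patch-wise contrastive (InfoNCE-style) loss with internal negatives:
# each output patch should match its corresponding input patch (diagonal)
# and mismatch every other patch of the same image (off-diagonal).
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_in, feat_out, tau=0.07):
    """feat_in, feat_out: (N, C) features of N corresponding patches
    from the input and the corrected image at one network layer."""
    feat_in = F.normalize(feat_in, dim=1)
    feat_out = F.normalize(feat_out, dim=1)
    logits = feat_out @ feat_in.t() / tau                        # (N, N)
    targets = torch.arange(feat_in.size(0), device=feat_in.device)
    return F.cross_entropy(logits, targets)

loss = patch_nce_loss(torch.randn(64, 128), torch.randn(64, 128))
```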


Subjects
Artifacts; Spiral Cone-Beam Computed Tomography; Four-Dimensional Computed Tomography; Motion; Unsupervised Machine Learning
4.
Quant Imaging Med Surg; 11(12): 4709-4720, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888183

ABSTRACT

BACKGROUND: In radiotherapy for nasopharyngeal carcinoma (NPC), magnetic resonance imaging (MRI) is widely used to delineate the tumor area more accurately. While MRI offers higher soft-tissue contrast, patient positioning and couch correction based on bony image fusion with computed tomography (CT) are also necessary. There is thus a need for high image contrast between bone and soft tissue to facilitate target delineation and patient positioning in NPC radiotherapy. In this paper, our aim is to develop a novel image conversion between the CT and MRI modalities that yields clear bone and soft-tissue images simultaneously, here called bone-enhanced MRI (BeMRI). METHODS: Thirty-five patients were retrospectively selected for this study. All patients underwent clinical CT simulation and 1.5T MRI within the same week at Shenzhen Second People's Hospital. To synthesize BeMRI, two deep learning networks, U-Net and CycleGAN, were constructed to transform MRI into synthetic CT (sCT) images. Each network used 28 patients' images as the training set, with the remaining 7 patients as the test set (~1/5 of the dataset). The bone structure was then extracted from the sCT by a threshold-based method and embedded into the corresponding part of the MRI image to generate the BeMRI image. Network performance was evaluated with the following metrics: mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). RESULTS: Both deep learning models achieved good performance and effectively extracted bone structure from MRI. The supervised U-Net model achieved the best results, with the lowest overall average MAE of 125.55 (P<0.05), the highest SSIM of 0.89, and the highest PSNR of 23.84. These results indicate that BeMRI displays bone structure at higher contrast than conventional MRI. CONCLUSIONS: A new image modality, BeMRI, a composite of CT and MRI, was proposed. With high image contrast for both bone structure and soft tissue, BeMRI should facilitate tumor localization and patient positioning and eliminate the need to switch frequently between separate MRI and CT images during NPC radiotherapy.
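The compositing step lends itself to a short sketch: threshold bone in the sCT and embed it into the MRI. The 150 HU cutoff and the intensity rescaling below are illustrative assumptions; the paper's exact threshold and blending are not reproduced here.

```python
# Hedged sketch of BeMRI compositing: extract a bone mask from the
# synthetic CT by thresholding (HU cutoff assumed) and paste the bone
# voxels, rescaled to the MRI intensity range, into the MRI.
import numpy as np

def compose_bemri(mri, sct_hu, bone_hu=150.0):
    bone_mask = sct_hu > bone_hu                  # threshold-based bone mask
    bemri = mri.astype(np.float32).copy()
    bone = sct_hu[bone_mask].astype(np.float32)
    # Rescale bone HU into the MRI intensity range before embedding.
    bone = (bone - bone.min()) / (np.ptp(bone) + 1e-8) * mri.max()
    bemri[bone_mask] = bone
    return bemri
```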

5.
Front Oncol; 11: 686875, 2021.
Article in English | MEDLINE | ID: mdl-34350115

ABSTRACT

PURPOSE: In recent years, cone-beam computed tomography (CBCT) has been increasingly used in adaptive radiation therapy (ART). However, compared with planning computed tomography (PCT), CBCT images have much more noise and many more imaging artifacts, so it is necessary to improve the image quality and Hounsfield unit (HU) accuracy of CBCT. In this study, we developed an unsupervised deep learning network (CycleGAN) model to calibrate pelvic CBCT images and extend potential clinical applications in CBCT-guided ART. METHODS: To train CycleGAN to generate synthetic PCT (sPCT), we used unpaired CBCT and PCT images from 49 patients as inputs. Deformed PCT (dPCT) images, obtained by deformable registration of PCT to CBCT, were used as the ground truth for evaluation. The trained model converts uncorrected CBCT images into sPCT images, which take on the characteristics of PCT images while keeping the anatomical structure of the CBCT images unchanged. To demonstrate the effectiveness of the proposed CycleGAN, we used an additional nine independent patients for testing. RESULTS: We compared the sPCT images with the dPCT ground truth. The average mean absolute error (MAE) over the whole image on the testing data decreased from 49.96 ± 7.21 HU to 14.6 ± 2.39 HU, and the average MAE of the fat and muscle ROIs decreased from 60.23 ± 7.3 HU to 16.94 ± 7.5 HU and from 53.16 ± 9.1 HU to 13.03 ± 2.63 HU, respectively. CONCLUSION: We developed an unsupervised learning method to generate high-quality corrected CBCT images (sPCT). With further evaluation and clinical implementation, sPCT could replace CBCT in ART.
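A minimal sketch of the reported evaluation, computing MAE in HU between sPCT and the dPCT ground truth over the whole image or within an ROI mask; the inputs and masking scheme are assumed.

```python
# MAE in Hounsfield units between synthetic PCT and deformed PCT
# (ground truth), optionally restricted to a boolean ROI mask.
import numpy as np

def mae_hu(spct, dpct, roi=None):
    diff = np.abs(spct.astype(np.float32) - dpct.astype(np.float32))
    return float(diff[roi].mean()) if roi is not None else float(diff.mean())
```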
