1.
Quant Imaging Med Surg; 14(7): 4579-4604, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39022265

ABSTRACT

Background: The information in multimodal magnetic resonance imaging (MRI) is complementary, and combining multiple modalities for brain tumor image segmentation can improve segmentation accuracy, which is of great significance for disease diagnosis and treatment. However, varying degrees of missing modality data often occur in clinical practice, which can cause serious performance degradation, or even outright failure, in segmentation methods that rely on full-modality sequences. To address this problem, this study aimed to design a new deep learning network for incomplete multimodal brain tumor segmentation. Methods: We propose a novel cross-modal attention fusion-based deep neural network (CMAF-Net) for incomplete multimodal brain tumor segmentation, built on a three-dimensional (3D) U-Net encoder-decoder architecture with a 3D Swin block and a cross-modal attention fusion (CMAF) block. A convolutional encoder first extracts modality-specific features from the different modalities, and a 3D Swin block models long-range dependencies to obtain richer information for brain tumor segmentation. A cross-attention-based CMAF module then handles different missing-modality situations by fusing features across modalities to learn shared representations of the tumor regions. Finally, the fused latent representation is decoded to produce the segmentation result. Additionally, a channel attention module (CAM) and a spatial attention module (SAM) are incorporated into the network to further improve robustness: the CAM helps the network focus on important feature channels, and the SAM learns the importance of different spatial regions.
Results: Evaluation experiments on the widely used BraTS 2018 and BraTS 2020 datasets demonstrated the effectiveness of the proposed CMAF-Net, which achieved average Dice scores of 87.9%, 81.8%, and 64.3%, and Hausdorff distances of 4.21, 5.35, and 4.02, for the whole tumor, tumor core, and enhancing tumor on the BraTS 2020 dataset, respectively, outperforming several state-of-the-art segmentation methods in missing-modality situations. Conclusions: The experimental results show that the proposed CMAF-Net achieves accurate brain tumor segmentation when modalities are missing and has promising application potential.
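The Dice score reported above measures the voxel overlap between a predicted segmentation mask and the ground truth. A minimal illustrative sketch of the computation on binary masks (not the authors' code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flattened sequences of 0/1.

    Dice = 2 * |pred ∩ truth| / (|pred| + |truth|);
    1.0 means perfect overlap, 0.0 means none.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are in perfect agreement.
    return 1.0 if total == 0 else 2.0 * intersection / total
```

For example, a prediction agreeing on 2 of 3 foreground voxels against a 2-voxel ground truth yields 2*2/(3+2) = 0.8.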

2.
Radiat Oncol; 19(1): 20, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38336759

ABSTRACT

OBJECTIVE: This study aimed to present a deep-learning network called contrastive learning-based cycle generative adversarial networks (CLCGAN) to mitigate streak artifacts and correct CT values in four-dimensional cone-beam computed tomography (4D-CBCT) for dose calculation in lung cancer patients. METHODS: 4D-CBCT and 4D computed tomography (CT) scans of 20 patients with locally advanced non-small cell lung cancer were used as paired data to train the deep-learning model. The lung tumors were located in the right upper lobe, right lower lobe, left upper lobe, left lower lobe, or mediastinum. Data from an additional five patients were used to create 4D synthetic CT (sCT) images for testing. Using 4D-CT as the ground truth, the quality of the 4D-sCT images was evaluated with quantitative and qualitative assessment methods, and the correction of CT values was evaluated both holistically and locally. To further validate the accuracy of the dose calculations, we compared the dose distributions and calculations of 4D-CBCT and 4D-sCT with those of 4D-CT. RESULTS: The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the 4D-sCT increased from 87% and 22.31 dB to 98% and 29.15 dB, respectively. Compared with cycle-consistent generative adversarial networks, CLCGAN improved SSIM and PSNR by 1.1% (p < 0.01) and 0.42% (p < 0.01). Furthermore, CLCGAN significantly decreased the absolute mean differences of CT values in the lungs, bones, and soft tissues. The dose calculation results revealed a significant improvement in 4D-sCT compared with 4D-CBCT. CLCGAN was the most accurate for the dose calculations of the left lung (V5Gy), right lung (V5Gy), right lung (V20Gy), PTV (D98%), and spinal cord (D2%), with relative dose differences reduced by 6.84%, 3.84%, 1.46%, 0.86%, and 3.32%, respectively, compared with 4D-CBCT.
CONCLUSIONS: Based on the satisfactory results obtained in terms of image quality and CT value measurement, it can be concluded that CLCGAN-corrected 4D-CBCT can be used for dose calculation in lung cancer.
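The PSNR improvement cited above quantifies reconstruction fidelity against the 4D-CT reference in decibels. A minimal illustrative sketch of the standard PSNR formula (not the authors' code):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    flattened images with intensities in [0, max_val].

    PSNR = 10 * log10(max_val^2 / MSE); higher is better,
    and identical images yield infinity.
    """
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

A uniform error of 0.1 on a [0, 1] scale gives an MSE of 0.01 and therefore a PSNR of 20 dB, which helps place the reported 22.31 dB → 29.15 dB improvement in context.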


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Four-Dimensional Computed Tomography; Radiotherapy Planning, Computer-Assisted/methods
3.
Radiat Oncol; 19(1): 66, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38811994

ABSTRACT

OBJECTIVES: Accurate segmentation of the clinical target volume (CTV) in CBCT images allows changes in the CTV to be observed during a patient's radiotherapy and lays the foundation for the subsequent implementation of adaptive radiotherapy (ART). However, segmentation is challenging because of the poor quality of CBCT images and the difficulty of obtaining target volumes. An uncertainty estimation- and attention-based semi-supervised model called residual convolutional block attention-uncertainty aware mean teacher (RCBA-UAMT) was proposed to delineate the CTV in cone-beam computed tomography (CBCT) images of breast cancer automatically. METHODS: A total of 60 patients who underwent radiotherapy after breast-conserving surgery were enrolled in this study, contributing 60 planning CTs and 380 CBCTs. RCBA-UAMT was built by integrating residual and attention modules into a 3D UNet backbone; the attention module adjusts the channel and spatial weights of the extracted image features. The proposed design can train the model and segment CBCT images with a small amount of labeled data (5%, 10%, or 20%) and a large amount of unlabeled data. Four evaluation metrics, namely, the Dice similarity coefficient (DSC), Jaccard index, average surface distance (ASD), and 95% Hausdorff distance (95HD), were used to assess segmentation performance quantitatively. RESULTS: The proposed method achieved an average DSC, Jaccard, 95HD, and ASD of 82%, 70%, 8.93 mm, and 1.49 mm, respectively, for CTV delineation on breast cancer CBCT images. Compared with three classical methods (mean teacher, uncertainty-aware mean teacher, and uncertainty rectified pyramid consistency), DSC and Jaccard increased by 7.89-9.33% and 14.75-16.67%, respectively, while 95HD and ASD decreased by 33.16-67.81% and 36.05-75.57%, respectively.
Comparative experiments with different proportions of labeled data (5%, 10%, and 20%) showed significant differences in the DSC, Jaccard, and 95HD metrics between 5% and 10% and between 5% and 20% labeled data, whereas no significant differences were observed between 10% and 20% for any metric. Therefore, only 10% labeled data is needed to achieve the experimental objective. CONCLUSIONS: Using the proposed RCBA-UAMT, the CTV in breast cancer CBCT images can be delineated reliably with a small amount of labeled data. These delineated images can be used to observe changes in the CTV and lay the foundation for the follow-up implementation of ART.
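The mean-teacher family of baselines compared above maintains a teacher model whose weights are an exponential moving average (EMA) of the student's weights; the slowly changing teacher then provides stable targets for a consistency loss on unlabeled images. A minimal illustrative sketch of that EMA update, with weights represented as flat lists (not the authors' code):

```python
def ema_update(teacher_weights, student_weights, alpha=0.99):
    """One mean-teacher EMA step:

        teacher <- alpha * teacher + (1 - alpha) * student

    With alpha close to 1, the teacher tracks the student slowly,
    smoothing out noisy individual training steps.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]
```

After each optimizer step on the student, the teacher is refreshed with `ema_update`; only the student receives gradients, while the teacher produces the (optionally uncertainty-weighted) pseudo-targets.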


Subject(s)
Breast Neoplasms; Cone-Beam Computed Tomography; Radiotherapy Planning, Computer-Assisted; Humans; Cone-Beam Computed Tomography/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/radiotherapy; Breast Neoplasms/pathology; Female; Radiotherapy Planning, Computer-Assisted/methods; Uncertainty; Image Processing, Computer-Assisted/methods; Algorithms