Results 1 - 3 of 3
1.
EJNMMI Phys ; 11(1): 67, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052194

ABSTRACT

PURPOSE: Effective radiation therapy requires accurate segmentation of head and neck cancer, one of the most common cancers. With the advance of deep learning, various methods have been proposed that use positron emission tomography-computed tomography (PET/CT) to obtain complementary information. However, these approaches are computationally expensive because they separate the feature-extraction and fusion functions, and they do not exploit the high sensitivity of PET. We propose a new deep-learning-based approach to alleviate these challenges. METHODS: We propose a tumor region attention module that fully exploits the high sensitivity of PET, and we design a network that learns the correlation between PET and CT features using squeeze-and-excitation normalization (SE Norm) without separating the feature-extraction and fusion functions. In addition, we introduce multi-scale context fusion, which exploits contextual information at different scales. RESULTS: The HECKTOR 2021 challenge dataset was used for training and testing. The proposed model outperformed state-of-the-art medical image segmentation models; in particular, the Dice similarity coefficient increased by 8.78% compared with U-Net. CONCLUSION: The proposed network segmented the complex shape of the tumor better than state-of-the-art medical image segmentation methods, accurately distinguishing tumor from non-tumor regions.
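The SE Norm mentioned in the METHODS builds on squeeze-and-excitation gating: each channel is pooled to one descriptor, mapped to a gate in (0, 1), and the channel is rescaled by that gate. A minimal plain-Python sketch of the recalibration idea follows; `se_gate` and its scalar `weights` are illustrative stand-ins (a real SE block learns a two-layer MLP for the excitation step), not the paper's implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_gate(feature_maps, weights):
    """Squeeze-and-excitation channel recalibration (minimal sketch).

    feature_maps: list of channels, each a flat list of activations.
    weights: one illustrative scalar per channel, standing in for the
    learned excitation MLP of a real SE block.
    """
    # Squeeze: global average pool each channel to one descriptor.
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]
    # Excite: map each descriptor to a per-channel gate in (0, 1).
    gates = [sigmoid(w * s) for w, s in zip(weights, squeezed)]
    # Scale: reweight every activation in a channel by its gate.
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]
```

With zero weights every gate is sigmoid(0) = 0.5, so each channel is simply halved; learned weights let the network amplify informative channels and suppress uninformative ones.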

2.
Phys Med Biol ; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39312947

ABSTRACT

Bone scans play an important role in skeletal lesion assessment, but gamma cameras suffer from low sensitivity and high noise levels. Deep learning (DL) has emerged as a promising way to enhance image quality without increasing radiation exposure or scan time. However, existing self-supervised denoising methods, such as Noise2Noise (N2N), may introduce deviations from the clinical standard in bone scans. This study proposes an improved self-supervised denoising technique that minimizes discrepancies between DL-denoised and full-scan images. A retrospective analysis of 351 whole-body bone scan data sets was conducted. We used the N2N and Noise2FullCount (N2F) denoising models, along with an interpolated version of N2N (iN2N). Denoising networks were trained separately for each reduced scan time from 5% to 50%, and also on a mixed training dataset that included all shortened scans. We performed quantitative analysis and clinical evaluation by nuclear medicine experts. The denoising networks effectively generated images resembling full scans: N2F revealed distinctive patterns for different scan times, N2N produced smooth textures with slight blurring, and iN2N closely mirrored full-scan patterns. Quantitative analysis showed that denoising improved with longer input times and that mixed-count training outperformed fixed-count training. Traditional denoising methods lagged behind DL-based denoising, and N2N showed limitations on long-scan images. Clinical evaluation favored N2N and iN2N in resolution, noise, blurriness, and findings, showing their potential for enhanced diagnostic performance in quarter-time scans. The improved self-supervised denoising technique presented here offers a viable way to enhance bone scan image quality while minimizing deviations from clinical standards. Its effectiveness was demonstrated quantitatively and clinically, showing promise for quarter-time scans without compromising diagnostic performance and potentially aiding more accurate clinical diagnoses.
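The Noise2Noise approach discussed above trains on pairs of independently noisy images of the same scene instead of noisy/clean pairs. One standard way to obtain such a pair from count data is random thinning of the event stream; the sketch below (hypothetical function name, not this study's pipeline, which pairs shortened and full scans) shows the pairing idea.

```python
import random

def split_counts(events, seed=0):
    """Randomly thin an event list into two disjoint halves.

    Each event lands in exactly one half, so the two halves are
    independent noisy realizations of the same underlying signal --
    a valid input/target pair for Noise2Noise-style training.
    """
    rng = random.Random(seed)
    half_a, half_b = [], []
    for ev in events:
        (half_a if rng.random() < 0.5 else half_b).append(ev)
    return half_a, half_b
```

Binning each half into an image then yields the two noisy realizations; because the halves share the signal but not the noise, a network trained to map one onto the other learns to predict the noise-free mean.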

3.
Nucl Med Mol Imaging ; 54(6): 299-304, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33282001

ABSTRACT

PURPOSE: Early deep-learning-based image denoising techniques mainly focused on fully supervised models that learn to generate a clean image from a noisy input (noise2clean: N2C). The aim of this study was to explore the feasibility of self-supervised methods (noise2noise: N2N and noiser2noise: Nr2N) for PET image denoising on measured PET data sets by comparing their performance with the conventional N2C model. METHODS: For training and evaluating the networks, 18F-FDG brain PET/CT data from 14 patients were retrospectively used (10 for training and 4 for testing). From the 60-min list-mode data, we generated a total of 100 data bins of 10-s duration. We also generated 40-s data by adding four non-overlapping 10-s bins, and 300-s reference data by adding all list-mode data. We employed U-Net, which is widely used for various tasks in biomedical imaging, to train and test the proposed denoising models. RESULTS: N2C, N2N, and Nr2N were all effective at improving the noisy inputs. While N2N showed PSNR equivalent to N2C at all noise levels, Nr2N yielded higher SSIM than N2N. N2N yielded denoised images similar to the reference image with Gaussian filtering, regardless of the input noise level, and image contrast was better in the N2N results. CONCLUSION: The self-supervised denoising methods will be useful for reducing PET scan time or radiation dose.
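The data preparation in the METHODS above (fixed-duration bins cut from list-mode data, with longer data sets built by summing non-overlapping bins) can be sketched in plain Python; the function names are illustrative, and real list-mode data would be histogrammed into image voxels rather than kept as raw timestamps.

```python
def bin_events(timestamps, bin_len, n_bins):
    """Histogram list-mode event timestamps into fixed-duration bins."""
    bins = [[] for _ in range(n_bins)]
    for t in timestamps:
        i = int(t // bin_len)  # index of the bin containing time t
        if 0 <= i < n_bins:
            bins[i].append(t)
    return bins

def combine_bins(bins, indices):
    """Merge non-overlapping bins into one longer-duration data set."""
    merged = []
    for i in indices:
        merged.extend(bins[i])
    return merged
```

Under this sketch, four 10-s bins merged with `combine_bins` give a 40-s data set, and merging every bin gives the long-duration reference, mirroring the study's 10-s/40-s/300-s hierarchy.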
