Results 1 - 2 of 2
1.
Phys Med Biol ; 69(17)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39084640

ABSTRACT

Objective. This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). Approach. We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as input data types. Second, we set the input tensor shape as either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were the axial slices of images normalized using the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. Main results. The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), by ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region-of-interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973). Significance. This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
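The abstract above reports image quality in terms of PSNR and SSIM. As a minimal sketch of how these figures of merit are computed, the following pure-Python functions implement PSNR and a simplified global (non-windowed) SSIM; the paper itself presumably uses the standard windowed SSIM, so treat this as an illustration of the formulas rather than the authors' evaluation code.

```python
import math

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio between a reference image and a test
    image, both given as equal-length flat lists of pixel values."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Simplified SSIM computed over the whole image (the standard
    definition averages a windowed version of this same expression)."""
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(img) / n
    var_x = sum((v - mu_x) ** 2 for v in ref) / n
    var_y = sum((v - mu_y) ** 2 for v in img) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(ref, img)) / n
    # Stabilizing constants from the original SSIM definition.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Both metrics reach their best values (infinite PSNR, SSIM of 1.0) only when the denoised output matches the reference exactly, which is why the reported PSNR of 28.920 and SSIM of 0.973 indicate close but not perfect agreement with the directly normalized targets.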


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Humans; Phantoms, Imaging; Monte Carlo Method
2.
Ann Nucl Med ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842629

ABSTRACT

BACKGROUND: Cardiac positron emission tomography (PET) can visualize and quantify the molecular and physiological pathways of cardiac function. However, cardiac and respiratory motion can introduce blurring that reduces PET image quality and quantitative accuracy. Dual cardiac- and respiratory-gated PET reconstruction can mitigate motion artifacts but increases noise, as only a subset of the data is used for each time frame of the cardiac cycle. AIM: The objective of this study is to create a zero-shot image denoising framework using a conditional generative adversarial network (cGAN) for improving image quality and quantitative accuracy in non-gated and dual-gated cardiac PET images. METHODS: Our study included retrospective list-mode data from 40 patients who underwent an 18F-fluorodeoxyglucose (18F-FDG) cardiac PET study. We initially trained and evaluated a 3D cGAN, known as Pix2Pix, on simulated non-gated low-count PET data paired with corresponding full-count target data, and then deployed the model on an unseen test set acquired on the same PET/CT system, including both non-gated and dual-gated PET data. RESULTS: Quantitative analysis demonstrated that the 3D Pix2Pix network architecture achieved significantly (p value < 0.05) enhanced image quality and accuracy in both non-gated and gated cardiac PET images. At 5%, 10%, and 15% preserved count statistics, the model increased peak signal-to-noise ratio (PSNR) by 33.7%, 21.2%, and 15.5%, structural similarity index (SSIM) by 7.1%, 3.3%, and 2.2%, and reduced mean absolute error (MAE) by 61.4%, 54.3%, and 49.7%, respectively. When tested on dual-gated PET data, the model consistently reduced noise, irrespective of cardiac/respiratory motion phases, while maintaining image resolution and accuracy. Significant improvements were observed across all gates, including a 34.7% increase in PSNR, a 7.8% improvement in SSIM, and a 60.3% reduction in MAE.
CONCLUSION: The findings of this study indicate that dual-gated cardiac PET images, which often have post-reconstruction artifacts potentially affecting diagnostic performance, can be effectively improved using a generative pre-trained denoising network.
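The low-count training data described above preserves 5%, 10%, or 15% of the original count statistics. The abstract does not state the exact subsampling method, but a common way to emulate a low-count acquisition from full-count data is binomial thinning, where each recorded event is kept independently with the desired probability. The sketch below illustrates that idea on per-bin counts; the function name and data layout are assumptions for illustration, not the authors' pipeline.

```python
import random

def thin_counts(counts_per_bin, keep_fraction, rng=None):
    """Binomial thinning: each event in a bin is kept independently with
    probability keep_fraction, emulating a low-count acquisition.

    counts_per_bin: list of non-negative integer event counts.
    keep_fraction:  fraction of counts to preserve (e.g. 0.05 for 5%).
    """
    rng = rng or random.Random()
    thinned = []
    for counts in counts_per_bin:
        # Draw one Bernoulli trial per event; sum the kept events.
        kept = sum(1 for _ in range(counts) if rng.random() < keep_fraction)
        thinned.append(kept)
    return thinned
```

Because each event is an independent Bernoulli trial, the thinned data retains the Poisson-like statistics of a genuinely shorter or gated acquisition, which is what makes such pairs plausible training data for a low-count-to-full-count denoising model.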
