Results 1 - 20 of 41
1.
Eur J Nucl Med Mol Imaging ; 51(2): 358-368, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37787849

ABSTRACT

PURPOSE: Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS: Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can fit scenarios of different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS: Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet, and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION: DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet, and GAN-based denoising methods.
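As a concrete illustration of the data-consistency idea described above, the sketch below runs a toy DDPM reverse process in which the measured PET image softly re-enters every refinement step. The stand-in `denoise_fn`, the mixing weight `dc_weight`, and the linear mixing rule are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def ddpm_denoise_with_dc(x_T, pet_img, denoise_fn, betas, dc_weight=0.5):
    """Toy DDPM reverse process with a PET data-consistency step.

    x_T        : initial Gaussian-noise image
    pet_img    : noisy PET image used as a soft constraint at each step
    denoise_fn : stand-in for the trained network predicting the noise
    betas      : diffusion noise schedule
    """
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = x_T
    for t in reversed(range(len(betas))):
        eps = denoise_fn(x, t)  # predicted noise at step t
        # standard DDPM posterior mean
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        # data consistency: pull the estimate toward the measured PET image
        x = (1.0 - dc_weight) * x + dc_weight * pet_img
        if t > 0:  # inject noise on all but the final step
            x = x + np.sqrt(betas[t]) * np.random.randn(*x.shape)
    return x
```

With a real trained network in place of `denoise_fn`, the same loop covers both input variants discussed in the abstract.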


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Signal-To-Noise Ratio , Models, Statistical , Algorithms
2.
NMR Biomed ; 35(4): e4224, 2022 04.
Article in English | MEDLINE | ID: mdl-31865615

ABSTRACT

Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows quantitative, non-invasive measurement of blood perfusion, with great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high-resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs, but only the subject's own anatomical prior, such as T1-weighted images, as the network input. The neural network was trained from scratch during the denoising or reconstruction process, with noisy images or sparsely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo experiment data obtained from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with a 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve the image quality and accelerate the imaging speed of ASL imaging.
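The training setup described above can be sketched in miniature: the subject's own anatomical prior is the network input and the noisy image is the label, so no external training pairs are needed. Here the "network" is only a per-pixel affine map trained by gradient descent on an MSE loss, an illustrative assumption; the paper uses a CNN, whose structure (plus early stopping) supplies the actual regularization.

```python
import numpy as np

def dip_denoise(prior_img, noisy_img, n_iters=100, lr=0.05):
    """Minimal sketch of the unsupervised setup: map the anatomical prior
    (network input) to the noisy image (training label) by fitting a
    per-pixel affine model (w, b) with gradient descent on MSE."""
    w = np.zeros_like(prior_img)
    b = np.zeros_like(prior_img)
    for _ in range(n_iters):
        residual = (w * prior_img + b) - noisy_img
        w -= lr * residual * prior_img  # gradient of 0.5*MSE w.r.t. w
        b -= lr * residual              # gradient of 0.5*MSE w.r.t. b
    return w * prior_img + b
```

In the real framework the limited capacity of the CNN keeps the output from fitting the noise exactly; this toy model only demonstrates the input/label arrangement.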


Subjects
Deep Learning , Brain , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Signal-To-Noise Ratio , Spin Labels
3.
Neuroimage ; 240: 118380, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34252526

ABSTRACT

Parametric imaging based on dynamic positron emission tomography (PET) has wide applications in neurology. Compared to indirect methods, direct reconstruction methods, which reconstruct parametric images directly from the raw PET data, have superior image quality due to better noise modeling and the richer information extracted from the raw data. For low-dose scenarios, the advantages of direct methods are even more obvious. However, the wide adoption of direct reconstruction is inevitably impeded by its excessive computational demand and the limited accessibility of raw data. In addition, motion modeling inside dynamic PET image reconstruction raises further computational challenges for direct reconstruction methods. In this work, we focused on the 18F-FDG Patlak model and proposed a data-driven approach that can estimate motion-corrected full-dose direct Patlak images from the dynamic PET reconstruction series, based on a novel temporal non-local convolutional neural network. During network training, direct reconstruction with motion correction based on full-dose dynamic PET sinograms was performed to obtain the training labels. The reconstructed full-dose/low-dose dynamic PET images were supplied as the network input. In addition, a temporal non-local block based on the dynamic PET images was proposed to better recover structural information and reduce image noise. During testing, the proposed network can directly output high-quality Patlak parametric images from full-dose/low-dose dynamic PET images in seconds. Experiments based on 15 full-dose and 15 low-dose 18F-FDG brain datasets were conducted and analyzed to validate the feasibility of the proposed framework. Results show that the proposed framework can generate better image quality than the reference methods.
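For readers unfamiliar with the Patlak model mentioned above, the sketch below shows the classical (indirect) graphical analysis: after a time t*, the tissue curve obeys C_T(t)/C_p(t) = Ki · (∫C_p dτ / C_p(t)) + V, so a line fit yields the net influx rate Ki (slope) and intercept V. This is for illustration only; the paper's network outputs Patlak images directly rather than fitting per voxel.

```python
import numpy as np

def patlak_fit(ct, cp, t, t_star_idx):
    """Patlak graphical analysis on sampled tissue (ct) and plasma (cp)
    time-activity curves; returns (Ki, intercept)."""
    # cumulative trapezoidal integral of the plasma input function
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = int_cp[t_star_idx:] / cp[t_star_idx:]  # "Patlak time"
    y = ct[t_star_idx:] / cp[t_star_idx:]
    ki, intercept = np.polyfit(x, y, 1)        # linear fit after t*
    return ki, intercept
```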


Subjects
Brain/diagnostic imaging , Brain/metabolism , Data Interpretation, Statistical , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Positron-Emission Tomography/methods , Female , Humans , Male
4.
Eur J Nucl Med Mol Imaging ; 48(5): 1351-1361, 2021 05.
Article in English | MEDLINE | ID: mdl-33108475

ABSTRACT

PURPOSE: PET measures of amyloid and tau pathologies are powerful biomarkers for the diagnosis and monitoring of Alzheimer's disease (AD). Because cortical regions are close to bone, the quantitation accuracy of amyloid and tau PET imaging can be significantly influenced by errors in attenuation correction (AC). This work presents an MR-based AC method that combines deep learning with a novel ultrashort time-to-echo (UTE)/multi-echo Dixon (mUTE) sequence for amyloid and tau imaging. METHODS: Thirty-five subjects who underwent both 11C-PiB and 18F-MK6240 scans were included in this study. The proposed method was compared with the Dixon-based atlas method as well as magnetization-prepared rapid acquisition with gradient echo (MPRAGE)- or Dixon-based deep learning methods. The Dice coefficient and validation loss of the generated pseudo-CT images were used for comparison. PET error images regarding standardized uptake value ratio (SUVR) were quantified through regional and surface analyses to evaluate the final AC accuracy. RESULTS: The Dice coefficients of the deep learning methods based on MPRAGE, Dixon, and mUTE images were 0.84 (0.91), 0.84 (0.92), and 0.87 (0.94) for the whole-brain (above-eye) bone regions, respectively, higher than those of the atlas method, 0.52 (0.64). The regional SUVR error for the atlas method was around 6%, higher than the regional SUV error. The regional SUV and SUVR errors for all deep learning methods were below 2%, with the mUTE-based deep learning method performing best. As for the surface analysis, the atlas method showed the largest error (> 10%) near vertices inside the superior frontal, lateral occipital, superior parietal, and inferior temporal cortices. The mUTE-based deep learning method resulted in the fewest regions with error higher than 1%, with the largest error (> 5%) appearing near the inferior temporal and medial orbitofrontal cortices.
CONCLUSION: Deep learning with mUTE can generate accurate AC for amyloid and tau imaging in PET/MR.
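The Dice coefficient used above to score the generated pseudo-CT bone regions is simply the normalized overlap between two binary masks, 2|A∩B| / (|A|+|B|):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```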


Subjects
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Multimodal Imaging , Positron-Emission Tomography , Tomography, X-Ray Computed
5.
Proc IEEE Inst Electr Electron Eng ; 108(1): 51-68, 2020 Jan.
Article in English | MEDLINE | ID: mdl-38045770

ABSTRACT

Machine learning has found unique applications in nuclear medicine, from photon detection to quantitative image reconstruction. While there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still use simple signal processing methods to extract the time and position information from the detector signals. Now, with the availability of fast waveform digitizers, machine learning techniques have been applied to estimate the position and arrival time of high-energy photons. In quantitative image reconstruction, machine learning has been used to estimate various correction factors, including scattered events and attenuation images, as well as to reduce statistical noise in reconstructed images. Here, machine learning either provides a faster alternative to an existing time-consuming computation, as in the case of scatter estimation, or creates a data-driven approach to map an implicitly defined function, as in the case of estimating the attenuation map for PET/MR scans. In this article, we review the abovementioned applications of machine learning in nuclear medicine.

6.
Eur J Nucl Med Mol Imaging ; 46(13): 2780-2789, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31468181

ABSTRACT

PURPOSE: The image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, where no training pairs are needed. METHODS: In this method, the prior high-quality image from the patient was employed as the network input and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and a 18F-FDG PET/MR dataset containing 30 patients were later used for clinical data evaluation. The Gaussian, non-local mean (NLM) using CT/MR images as priors, BM4D, and Deep Decoder methods were included as reference methods. The contrast-to-noise ratio (CNR) improvements were used to rank the different methods based on the Wilcoxon signed-rank test. RESULTS: For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best performance regarding the bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all the datasets demonstrate that the proposed method can effectively smooth out the noise while recovering image details. CONCLUSION: The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
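The CNR improvement ratio used above as the figure of merit can be computed as follows; the ROI/background mask convention is an illustrative assumption (significance between methods was then assessed with the Wilcoxon signed-rank test, which is omitted here).

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio: (mean ROI - mean background) / background std."""
    return (img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

def cnr_improvement(denoised, noisy, roi_mask, bg_mask):
    """CNR improvement ratio (%) of a processed image over the noisy input."""
    c0 = cnr(noisy, roi_mask, bg_mask)
    c1 = cnr(denoised, roi_mask, bg_mask)
    return 100.0 * (c1 - c0) / c0
```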


Subjects
Deep Learning , Image Enhancement/methods , Positron-Emission Tomography , Signal-To-Noise Ratio , Unsupervised Machine Learning , Adult , Aged , Aged, 80 and over , Female , Humans , Image Processing, Computer-Assisted , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Phantoms, Imaging , Quality Control
7.
Phys Med Biol ; 69(15)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38959909

ABSTRACT

Objective. Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide, and [18F]F-FDG PET/CT is widely used for H&N cancer management. Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks. In this work, we proposed a 3D diffusion model to accurately perform H&N tumor segmentation from 3D PET and CT volumes. Approach. The 3D diffusion model was developed considering the 3D nature of the PET and CT images acquired. During the reverse process, the model utilized a 3D UNet structure and took the concatenation of 3D PET, CT, and Gaussian noise volumes as the network input to generate the tumor mask. Experiments based on the HECKTOR challenge dataset were conducted to evaluate the effectiveness of the proposed diffusion model. Several state-of-the-art techniques based on U-Net and Transformer structures were adopted as the reference methods. Benefits of employing both PET and CT as the network input, as well as of further extending the diffusion model from 2D to 3D, were investigated based on various quantitative metrics and qualitative results. Main results. Results showed that the proposed 3D diffusion model could generate more accurate segmentation results compared with the other methods (mean Dice of 0.739 compared to less than 0.726 for the other methods). Compared to the diffusion model in 2D form, the proposed 3D model yielded superior results (mean Dice of 0.739 compared to 0.669). Our experiments also highlighted the advantage of utilizing dual-modality PET and CT data over single-modality data for H&N tumor segmentation (mean Dice less than 0.570 for single-modality input). Significance. This work demonstrated the effectiveness of the proposed 3D diffusion model in generating more accurate H&N tumor segmentation masks compared to the other reference methods.


Subjects
Fluorodeoxyglucose F18 , Head and Neck Neoplasms , Imaging, Three-Dimensional , Positron Emission Tomography Computed Tomography , Head and Neck Neoplasms/diagnostic imaging , Humans , Imaging, Three-Dimensional/methods , Diffusion
8.
ArXiv ; 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38351928

ABSTRACT

Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide, and [18F]F-FDG PET/CT is widely used for H&N cancer management. Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks. In this work, we proposed a 3D diffusion model to accurately perform H&N tumor segmentation from 3D PET and CT volumes. The 3D diffusion model was developed considering the 3D nature of the PET and CT images acquired. During the reverse process, the model utilized a 3D UNet structure and took the concatenation of PET, CT, and Gaussian noise volumes as the network input to generate the tumor mask. Experiments based on the HECKTOR challenge dataset were conducted to evaluate the effectiveness of the proposed diffusion model. Several state-of-the-art techniques based on U-Net and Transformer structures were adopted as the reference methods. Benefits of employing both PET and CT as the network input, as well as of further extending the diffusion model from 2D to 3D, were investigated based on various quantitative metrics and the uncertainty maps generated. Results showed that the proposed 3D diffusion model could generate more accurate segmentation results compared with the other methods. Compared to the diffusion model in 2D form, the proposed 3D model yielded superior results. Our experiments also highlighted the advantage of utilizing dual-modality PET and CT data over single-modality data for H&N tumor segmentation.

9.
Med Phys ; 51(3): 2096-2107, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37776263

ABSTRACT

BACKGROUND: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images. PURPOSE: Despite the impressive representation capacity of vision transformer models, current vision transformer-based segmentation models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of their self-attention mechanism may be limited in extracting the complementary information that exists in multi-modal data. To this end, we propose a novel segmentation model, dubbed Cross-Modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. METHODS: We propose a novel architecture for cross-modal 3D semantic segmentation with two main components: (1) a cross-modal 3D Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted-window attention block for learning complementary information from the modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset.
We compared our method against nnU-Net (the backbone of the top five methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images. RESULTS: Empirical evidence demonstrates that our proposed method consistently outperforms the comparative techniques. This success can be attributed to the CMA module's capacity to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules. CONCLUSIONS: We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. Our model incorporates a cross-modal attention module, enabling the exchange of features between modalities at multiple resolutions. The experimental results establish the superiority of our method in capturing improved inter-modality correlations between PET and CT for head-and-neck tumor segmentation. Furthermore, the proposed methodology is applicable to other semantic segmentation tasks involving different imaging modalities, such as SPECT/CT or PET/MRI. Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation.
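The cross-modal attention idea above can be sketched in a single head: queries come from one modality (PET) while keys and values come from the other (CT), so each PET token aggregates complementary CT features. The actual SwinCross module is multi-head and window-shifted; the flat token/weight shapes used here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_modal_attention(pet_tokens, ct_tokens, Wq, Wk, Wv):
    """Single-head cross-modal attention sketch.

    pet_tokens : (n_pet, d) queries source
    ct_tokens  : (n_ct, d) keys/values source
    Wq, Wk, Wv : (d, d) projection matrices
    """
    q = pet_tokens @ Wq
    k = ct_tokens @ Wk
    v = ct_tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # rows sum to 1
    return attn @ v  # PET tokens enriched with CT features
```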


Subjects
Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron-Emission Tomography , Head and Neck Neoplasms/diagnostic imaging , Learning , Neural Networks, Computer , Image Processing, Computer-Assisted
10.
IEEE Trans Med Imaging ; 43(6): 2098-2112, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38241121

ABSTRACT

To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced to serve as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and on 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.


Subjects
Algorithms , Image Processing, Computer-Assisted , Phantoms, Imaging , Positron-Emission Tomography , Supervised Machine Learning , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Whole Body Imaging/methods , Positron Emission Tomography Computed Tomography/methods
11.
ArXiv ; 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38313194

ABSTRACT

Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.

12.
bioRxiv ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38712041

ABSTRACT

Spinal cord injuries (SCI) often lead to lifelong disability. Among the various types of injuries, incomplete and discomplete injuries, where some axons remain intact, offer potential for recovery. However, demyelination of these spared axons can worsen disability. Demyelination is a reversible phenomenon, and drugs like 4-aminopyridine (4AP), which target K+ channels in demyelinated axons, show that conduction can be restored. Yet, accurately assessing and monitoring demyelination post-SCI remains challenging due to the lack of suitable imaging methods. In this study, we introduce a novel approach utilizing the positron emission tomography (PET) tracer [18F]3F4AP, specifically targeting K+ channels in demyelinated axons, for SCI imaging. Rats with incomplete contusion injuries were imaged up to one month post-injury, revealing [18F]3F4AP's exceptional sensitivity to injury and its ability to detect temporal changes. Further validation through autoradiography and immunohistochemistry confirmed [18F]3F4AP's targeting of demyelinated axons. In a proof-of-concept study involving human subjects, [18F]3F4AP differentiated between a severe and a largely recovered incomplete injury, indicating axonal loss and demyelination, respectively. Moreover, alterations in tracer delivery were evident on dynamic PET images, suggestive of differences in spinal cord blood flow between the injuries. In conclusion, [18F]3F4AP demonstrates efficacy in detecting incomplete SCI in both animal models and humans. The potential for monitoring post-SCI demyelination changes and response to therapy underscores the utility of [18F]3F4AP in advancing our understanding and management of spinal cord injuries.

13.
Clin Infect Dis ; 56(5): 659-65, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23132172

ABSTRACT

BACKGROUND: Rapid point-of-care (POC) syphilis tests based on simultaneous detection of treponemal and nontreponemal antibodies (dual POC tests) offer the opportunity to increase coverage of syphilis screening and treatment. This study aimed to conduct a multisite performance evaluation of a dual POC syphilis test in China. METHODS: Participants were recruited from patients at sexually transmitted infection clinics and from high-risk groups in outreach settings at 6 sites in China. Three kinds of specimens (whole blood [WB], fingerprick blood [FB], and blood plasma [BP]) were used to evaluate the sensitivity and specificity of the Dual Path Platform (DPP) Syphilis Screen and Confirm test, comparing its treponemal and nontreponemal lines against the Treponema pallidum particle agglutination (TPPA) assay and the toluidine red unheated serum test (TRUST) as reference standards. RESULTS: A total of 3134 specimens (WB 1323, FB 488, and BP 1323) from 1323 individuals were collected. The sensitivities compared with TPPA were 96.7% for WB, 96.4% for FB, and 94.6% for BP, and the specificities were 99.3%, 99.1%, and 99.6%, respectively. The sensitivities compared with TRUST were 87.2% for WB, 85.8% for FB, and 88.4% for BP, and the specificities were 94.4%, 96.1%, and 95.0%, respectively. For specimens with a TRUST titer of 1:4 or higher, the sensitivities were 100.0% for WB, 97.8% for FB, and 99.6% for BP. CONCLUSIONS: The DPP test shows good sensitivity and specificity in detecting treponemal and nontreponemal antibodies in 3 kinds of specimens. It is hoped that this assay can be considered as an alternative in the diagnosis of syphilis, particularly in resource-limited areas.
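The sensitivity and specificity figures reported above come directly from paired test/reference results; the computation is standard and can be sketched as:

```python
def sens_spec(test_pos, ref_pos):
    """Sensitivity and specificity of a binary test against a reference
    standard (e.g. the DPP treponemal line vs. TPPA), from paired
    True/False results per specimen."""
    pairs = list(zip(test_pos, ref_pos))
    tp = sum(1 for t, r in pairs if t and r)          # true positives
    fn = sum(1 for t, r in pairs if not t and r)      # false negatives
    tn = sum(1 for t, r in pairs if not t and not r)  # true negatives
    fp = sum(1 for t, r in pairs if t and not r)      # false positives
    return tp / (tp + fn), tn / (tn + fp)
```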


Subjects
Antibodies, Bacterial/blood , Point-of-Care Systems , Syphilis Serodiagnosis/methods , Syphilis/diagnosis , Treponema pallidum/immunology , Adolescent , Adult , Aged , Aged, 80 and over , Agglutination Tests , China , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Sensitivity and Specificity , Syphilis/immunology , Young Adult
14.
IEEE Trans Med Imaging ; 42(3): 785-796, 2023 03.
Article in English | MEDLINE | ID: mdl-36288234

ABSTRACT

Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information into the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improving the kernel method would be adding an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulations and real patient data demonstrate that the neural KEM can outperform existing KEM and deep image prior methods.
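The KEM step at the core of the algorithm can be sketched as below: the image is represented as x = K·α, where K is a kernel matrix built from a prior image, and standard MLEM updates are applied to the coefficients α through the composed system A·K. The toy matrices and flat initialization are illustrative assumptions, and the paper's deep-learning step (updating α via the network) is omitted.

```python
import numpy as np

def kem(y, A, K, n_iters=20):
    """Kernelized EM sketch.

    y : measured counts (sinogram bins)
    A : system matrix
    K : kernel matrix derived from the prior image
    """
    AK = A @ K
    alpha = np.ones(K.shape[1])            # flat coefficient initialization
    sens = AK.sum(axis=0)                  # sensitivity (backprojection of ones)
    for _ in range(n_iters):
        proj = AK @ alpha                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        alpha = alpha * (AK.T @ ratio) / np.maximum(sens, 1e-12)  # MLEM update
    return K @ alpha                       # reconstructed image
```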


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Computer Simulation , Neural Networks, Computer , Algorithms
15.
IEEE Trans Med Imaging ; PP, 2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37995174

ABSTRACT

Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capturing long-range information. However, the calculation of global MSA for 3D images has high computational costs. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.

16.
Phys Med Biol ; 68(10)2023 05 15.
Article in English | MEDLINE | ID: mdl-37116511

ABSTRACT

Objective. Positron emission tomography (PET) imaging of tau deposition using [18F]-MK6240 often involves long acquisitions in older subjects, many of whom exhibit dementia symptoms. The resulting unavoidable head motion can greatly degrade image quality. Motion also increases the variability of PET quantitation across subjects in longitudinal studies, leading to larger sample sizes in clinical trials of Alzheimer's disease (AD) treatments. Approach. After using an ultra-short frame-by-frame motion detection method based on the list-mode data, we applied an event-by-event list-mode reconstruction to generate motion-corrected images from 139 scans acquired in 65 subjects. This approach was first validated against optical tracking data in two phantom experiments. We developed a motion metric based on the average voxel displacement in the brain to quantify the level of motion in each scan and thereby evaluate the effect of motion correction on studies with substantial motion. We estimated the rate of tau accumulation in longitudinal studies (51 subjects) by calculating the difference in the ratio of standard uptake values in key AD-related brain regions, and compared the regions' standard deviations across subjects between motion-corrected and non-motion-corrected images. Main results. Overall, 14% of the scans exhibited notable motion as quantified by the proposed metric, affecting 48% of the longitudinal datasets with three time points and 25% of all subjects. Motion correction reduced blurring in scans with notable motion and improved the accuracy of quantitative measures, decreasing the standard deviation of the rate of tau accumulation by 49%, 24%, 18%, and 16% in the entorhinal, inferior temporal, precuneus, and amygdala regions, respectively. Significance. The list-mode-based motion correction method can correct both fast and slow motion during brain PET scans. It leads to improved brain PET quantitation, which is crucial for imaging AD.
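The abstract's motion metric is described only as "the average voxel displacement in the brain"; a minimal sketch of one plausible reading, assuming a rigid-body 4x4 transform estimated per frame (the function name and interface are hypothetical, not from the paper):

```python
import numpy as np

def average_voxel_displacement(transform, voxels):
    """Mean displacement (mm) of brain voxels under a 4x4 rigid transform.

    `transform` is a homogeneous matrix mapping the reference head pose to
    the current pose; `voxels` is an (N, 3) array of brain-voxel
    coordinates in mm.
    """
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    moved = homo @ transform.T                             # apply the transform
    disp = np.linalg.norm(moved[:, :3] - voxels, axis=1)   # per-voxel shift
    return disp.mean()
```

A pure 2 mm translation, for instance, yields a metric of 2.0 regardless of the voxel positions; rotations produce position-dependent displacements, which is why averaging over the brain mask is needed.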


Subjects
Alzheimer Disease , Image Processing, Computer-Assisted , Humans , Aged , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Motion (Physics) , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging
17.
IEEE Trans Med Imaging ; 41(3): 680-689, 2022 03.
Article in English | MEDLINE | ID: mdl-34652998

ABSTRACT

Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels is available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long. In this work, we proposed an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative-equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint and was also used to construct a kernel layer for non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1×1×1 convolution layer. Evaluations on dynamic 18F-FDG and 11C-PiB datasets show that the proposed framework outperforms traditional and kernel-method-based direct reconstruction methods.


Subjects
Image Processing, Computer-Assisted , Positron Emission Tomography Computed Tomography , Algorithms , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Signal-To-Noise Ratio
18.
Med Phys ; 49(4): 2373-2385, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35048390

ABSTRACT

PURPOSE: Arterial spin labeling (ASL) magnetic resonance imaging (MRI) is an advanced noninvasive imaging technology that can measure cerebral blood flow (CBF) quantitatively without contrast agent injection or radiation exposure. However, because of the weak labeling, conventional ASL images usually suffer from a low signal-to-noise ratio (SNR), poor spatial resolution, and long acquisition time. Therefore, a method that can simultaneously improve spatial resolution and SNR is needed. METHODS: In this work, we proposed an unsupervised super-resolution (SR) method to improve ASL image resolution based on a pyramid of generative adversarial networks (GANs). Through layer-by-layer training, the generators learn features from the coarsest to the finest; the last layer's generator, which contains fine details and textures, was used to generate the final SR ASL images. In our framework, the corresponding T1-weighted MR image was supplied as a second-channel input of the generators to provide high-resolution prior information, and a low-pass-filter loss term was included to suppress the noise of the original ASL images. To evaluate performance, a simulation study and two real-patient experiments based on in vivo datasets from three healthy subjects on a 3T MR scanner were conducted, covering the low-resolution (LR)-to-normal-resolution (NR) and NR-to-SR tasks. The proposed method was compared to nearest-neighbor interpolation, trilinear interpolation, third-order B-spline interpolation, and deep image prior (DIP), with the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as quantification metrics. Averaged ASL images acquired with a 44-min acquisition were used as the ground truth for the real-patient LR-to-NR study. Ablation studies of the low-pass-filter loss term and the T1-weighted MR input were performed on simulation data.
RESULTS: For the simulation study, the proposed method achieved significantly higher PSNR (p-value < 0.05) and SSIM (p-value < 0.05) than the nearest-neighbor interpolation, trilinear interpolation, third-order B-spline interpolation, and DIP methods. For the real-patient LR-to-NR experiment, the proposed method generated high-quality SR ASL images with clearer structure boundaries and low noise levels, and had the highest mean PSNR and SSIM. For the real-patient NR-to-SR task, the results of the proposed method were sharper and clearer, and closest in structure to the reference 44-min acquisition among all methods. The proposed method also removed artifacts in the NR image while performing super-resolution. The ablation study verified that both the low-pass-filter loss term and the T1-weighted MR input are necessary. CONCLUSIONS: The proposed unsupervised multiscale GAN framework can simultaneously improve spatial resolution and reduce image noise. Results from simulation data and three healthy subjects show that it outperforms the nearest-neighbor interpolation, trilinear interpolation, third-order B-spline interpolation, and DIP methods.
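PSNR, one of the two quantification metrics used above, has a standard definition that is easy to state concretely. A minimal sketch of the conventional formula (the abstract does not specify the exact data range or implementation used in the study):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    if data_range is None:
        # Conventional fallback: dynamic range of the reference image.
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)   # mean squared error
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR means the super-resolved image is closer, voxel-wise, to the ground-truth reference; it complements SSIM, which instead scores local structural similarity.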


Subjects
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Artifacts , Humans , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Spin Labels
19.
IEEE Trans Biomed Eng ; 69(1): 4-14, 2022 01.
Article in English | MEDLINE | ID: mdl-33284746

ABSTRACT

Positron emission tomography (PET) is widely used for clinical diagnosis. Because PET suffers from low resolution and high noise, numerous efforts have tried to incorporate anatomical priors into PET image reconstruction, especially with the development of hybrid PET/CT and PET/MRI systems. In this work, we proposed a cube-based 3D structural convolutional sparse coding (CSC) concept for penalized-likelihood PET image reconstruction, named 3D PET-CSC. The proposed 3D PET-CSC takes advantage of the convolutional operation and incorporates anatomical priors without the need for registration or supervised training. As 3D PET-CSC codes the whole 3D PET image rather than patches, it alleviates the staircase artifacts commonly present in traditional patch-based sparse coding methods. Compared with traditional coding methods in the Fourier domain, the proposed method extends 3D CSC to a straightforward approach based on the pursuit of localized cubes. Moreover, we developed residual-image and ordered-subset mechanisms to further reduce the computational cost and accelerate convergence. Experiments based on computer simulations and clinical datasets demonstrate the superiority of 3D PET-CSC over the reference methods.


Subjects
Image Processing, Computer-Assisted , Positron Emission Tomography Computed Tomography , Algorithms , Artifacts , Imaging, Three-Dimensional , Phantoms, Imaging , Positron-Emission Tomography
20.
Med Image Anal ; 80: 102519, 2022 08.
Article in English | MEDLINE | ID: mdl-35767910

ABSTRACT

Recently, deep learning-based denoising methods have gradually been applied to PET image denoising and have achieved impressive results. Among them, one interesting framework is conditional deep image prior (CDIP), an unsupervised method that needs neither prior training nor a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. The kinetic model is the Logan reference tissue model, which avoids arterial sampling. A neural network was used to represent the images of the Logan slope and intercept, with the patient's computed tomography (CT) or magnetic resonance (MR) image supplied as the network input to provide anatomical information. The optimization problem was constructed and solved with the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method generates parametric images with more detailed structures. Quantification showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%; thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%; thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal mean (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%; thalamus of brain PET datasets: 103.59%±16.37%).
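The CNR improvement ratios quoted above follow a standard pattern: CNR contrasts an ROI against a background region normalized by background noise, and the improvement ratio is the relative gain over the unprocessed image. A minimal sketch using the conventional definitions (the paper's exact ROI and background delineations are not given in the abstract):

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio of an ROI against a background region."""
    return (img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

def cnr_improvement(img_denoised, img_orig, roi_mask, bg_mask):
    """Relative CNR gain of a denoised image over the original, in percent."""
    c0 = cnr(img_orig, roi_mask, bg_mask)
    c1 = cnr(img_denoised, roi_mask, bg_mask)
    return 100.0 * (c1 - c0) / c0
```

Halving the background noise while preserving the ROI-background contrast, for example, doubles the CNR, i.e. a 100% improvement ratio.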


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Algorithms , Computer Simulation , Humans , Image Processing, Computer-Assisted/methods , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods