Results 1 - 20 of 22
1.
J Nucl Cardiol ; 30(5): 1859-1878, 2023 10.
Article in English | MEDLINE | ID: mdl-35680755

ABSTRACT

Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (µ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated, since MRI does not contain direct information on photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise levels. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET and can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic µ-maps or CT images, which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate step of generating µ-maps or CT images and predict AC SPECT or PET images directly from non-attenuation-correction (NAC) SPECT or PET images. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discuss the principles and limitations of non-deep-learning AC methods, and then review the status and prospects of deep-learning-based methods for AC of SPECT and PET.
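
As a rough illustration of the indirect versus direct AC strategies contrasted in this review, the Python sketch below shows the two pipelines at the level of function calls; the networks (`mu_map_net`, `ac_image_net`), the reconstruction step, and all data are hypothetical placeholders, not any published implementation.

```python
import numpy as np

# Hedged sketch: indirect AC predicts a µ-map (or synthetic CT) that is then fed
# into an AC reconstruction, while direct AC maps NAC images to AC images in one
# step. "mu_map_net", "ac_image_net", and "reconstruct_with_ac" are hypothetical
# stand-ins for trained networks and a reconstruction engine.
def indirect_ac(nac_image, emission_data, mu_map_net, reconstruct_with_ac):
    mu_map = mu_map_net(nac_image)                     # network output: synthetic µ-map
    return reconstruct_with_ac(emission_data, mu_map)  # AC reconstruction uses the µ-map

def direct_ac(nac_image, ac_image_net):
    return ac_image_net(nac_image)                     # network predicts the AC image directly

# Toy stand-ins so the sketch runs end-to-end.
mu_map_net = lambda img: 0.15 * (img > img.mean())     # crude binary µ-map
reconstruct_with_ac = lambda em, mu: em * np.exp(mu)   # toy "correction"
ac_image_net = lambda img: img * 1.2                   # toy direct mapping

nac = np.random.rand(64, 64, 64)
print(indirect_ac(nac, nac, mu_map_net, reconstruct_with_ac).shape,
      direct_ac(nac, ac_image_net).shape)
```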


Subjects
Deep Learning; Positron Emission Tomography Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Tomography, Emission-Computed, Single-Photon; Magnetic Resonance Imaging/methods
2.
J Nucl Cardiol ; 30(1): 86-100, 2023 02.
Article in English | MEDLINE | ID: mdl-35508796

ABSTRACT

BACKGROUND: The GE Discovery NM (DNM) 530c/570c are dedicated cardiac SPECT scanners with 19 detector modules designed for stationary imaging. This study aims to incorporate additional projection angular sampling to improve reconstruction quality. A deep learning method is also proposed to generate synthetic dense-view image volumes from few-view counterparts. METHODS: By moving the detector array, a total of four projection angle sets were acquired and combined for image reconstructions. A deep neural network is proposed to generate synthetic four-angle images with 76 (19 × 4) projections from corresponding one-angle images with 19 projections. Simulated data, pig, physical phantom, and human studies were used for network training and evaluation. Reconstruction results were quantitatively evaluated using representative image metrics. The myocardial perfusion defect size of different subjects was quantified using FDA-cleared clinical software. RESULTS: Multi-angle reconstructions and network results have higher image resolution, improved uniformity on normal myocardium, more accurate defect quantification, and superior quantitative values on all the testing data. As validated against cardiac catheterization and diagnostic results, deep learning results showed improved image quality with better defect contrast on human studies. CONCLUSION: Increasing angular sampling can substantially improve image quality on the DNM, and deep learning can be implemented to improve reconstruction quality in the case of stationary imaging.
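
The multi-angle acquisition described above combines four stationary 19-projection sets into one 76-projection (19 × 4) set; a minimal sketch of that data combination, assuming each set is stored as an (angles, rows, cols) array:

```python
import numpy as np

# Hypothetical sketch: combine four stationary acquisitions of a 19-detector
# geometry into a single 76-projection set (19 x 4 = 76). Array shapes and the
# simple concatenation ordering are illustrative assumptions only.
def combine_angle_sets(angle_sets):
    """Concatenate projection sets acquired at shifted detector positions."""
    assert all(s.shape == angle_sets[0].shape for s in angle_sets)
    return np.concatenate(angle_sets, axis=0)  # -> (76, n_rows, n_cols)

one_angle_sets = [np.random.rand(19, 32, 32) for _ in range(4)]  # four simulated sets
dense = combine_angle_sets(one_angle_sets)
print(dense.shape)  # (76, 32, 32)
```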


Subjects
Deep Learning; Humans; Animals; Swine; Tomography, Emission-Computed, Single-Photon/methods; Tomography, X-Ray Computed/methods; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
3.
Eur J Nucl Med Mol Imaging ; 49(9): 3046-3060, 2022 07.
Article in English | MEDLINE | ID: mdl-35169887

ABSTRACT

PURPOSE: Deep-learning-based attenuation correction (AC) for SPECT includes both indirect and direct approaches. Indirect approaches generate attenuation maps (µ-maps) from emission images, while direct approaches predict AC images directly from non-attenuation-corrected (NAC) images without µ-maps. For dedicated cardiac SPECT scanners with CZT detectors, indirect approaches are challenging due to the limited field-of-view (FOV). In this work, we aim to 1) develop novel indirect approaches to improve the AC performance for dedicated SPECT, and 2) compare the AC performance of direct and indirect approaches for both general-purpose and dedicated SPECT. METHODS: For dedicated SPECT, we developed strategies to predict truncated µ-maps from NAC images reconstructed with a small matrix, or full µ-maps from NAC images reconstructed with a large matrix, using 270 anonymized clinical studies scanned on a GE Discovery NM/CT 570c SPECT/CT. For general-purpose SPECT, we implemented direct and indirect approaches using 400 anonymized clinical studies scanned on a GE NM/CT 850c SPECT/CT. NAC images in both photopeak and scatter windows were input to predict µ-maps or AC images. RESULTS: For dedicated SPECT, the averaged normalized mean square error (NMSE) using our proposed strategies with full µ-maps was 1.20 ± 0.72%, as compared to 2.21 ± 1.17% using previous direct approaches. The polar map absolute percent error (APE) using our approaches was 3.24 ± 2.79% (R2 = 0.9499), as compared to 4.77 ± 3.96% (R2 = 0.9213) using direct approaches. For general-purpose SPECT, the averaged NMSE of the predicted AC images using the direct approaches was 2.57 ± 1.06%, as compared to 1.37 ± 1.16% using the indirect approaches. CONCLUSIONS: We developed strategies for generating µ-maps for dedicated cardiac SPECT with a small FOV. For both general-purpose and dedicated SPECT, indirect approaches showed superior AC performance compared to direct approaches.
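
The voxel-wise normalized mean square error (NMSE) reported here (and in several of the other studies listed) can be written as a simple ratio; one common percentage definition, sketched with placeholder arrays:

```python
import numpy as np

# One common definition of voxel-wise NMSE, expressed as a percentage, used to
# compare a predicted image against its ground truth. Arrays are placeholders.
def nmse_percent(pred, truth):
    return 100.0 * np.sum((pred - truth) ** 2) / np.sum(truth ** 2)

truth = np.random.rand(64, 64, 64)
pred = truth + 0.05 * np.random.randn(64, 64, 64)
print(f"NMSE = {nmse_percent(pred, truth):.2f}%")
```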


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Tomography, Emission-Computed, Single-Photon/methods
4.
J Nucl Cardiol ; 29(6): 3379-3391, 2022 12.
Article in English | MEDLINE | ID: mdl-35474443

ABSTRACT

It has been proven feasible to generate attenuation maps (µ-maps) from cardiac SPECT using deep learning. However, this assumed that the training and testing datasets were acquired using the same scanner, tracer, and protocol. We investigated robust generation of CT-derived µ-maps from cardiac SPECT acquired with scanners, tracers, and protocols different from those of the training data. We first pre-trained a network using 120 studies injected with 99mTc-tetrofosmin and acquired on a GE 850 SPECT/CT with 360-degree gantry rotation, which was then fine-tuned and tested using 80 studies injected with 99mTc-sestamibi and acquired on a Philips BrightView SPECT/CT with 180-degree gantry rotation. The error between ground-truth and predicted µ-maps by transfer learning was 5.13 ± 7.02%, as compared to 8.24 ± 5.01% by direct transition without fine-tuning and 6.45 ± 5.75% by limited-sample training. The error between ground-truth and reconstructed images with predicted µ-maps by transfer learning was 1.11 ± 1.57%, as compared to 1.72 ± 1.63% by direct transition and 1.68 ± 1.21% by limited-sample training. It is feasible to apply a network pre-trained on a large amount of data from one scanner to data acquired on another scanner with different tracers and protocols, given proper transfer learning.
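
A minimal sketch of the scanner-to-scanner transfer-learning idea: pre-train on data from one scanner, then fine-tune part of the network on a small dataset from another scanner and tracer. The tiny network, frozen layers, and loop settings are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch of transfer learning across scanners: freeze early layers learned
# on scanner A, fine-tune the remaining layers on a small scanner-B dataset.
net = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
# ... assume "net" has already been pre-trained on scanner-A data here ...

for p in net[:2].parameters():          # freeze the first convolution block
    p.requires_grad = False
opt = torch.optim.Adam((p for p in net.parameters() if p.requires_grad), lr=1e-4)

scanner_b_nac = torch.randn(2, 1, 16, 32, 32)   # small scanner-B NAC dataset (placeholder)
scanner_b_mu = torch.rand(2, 1, 16, 32, 32)     # corresponding µ-maps (placeholder)
for epoch in range(5):                           # brief fine-tuning loop
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(scanner_b_nac), scanner_b_mu)
    loss.backward()
    opt.step()
```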


Subjects
Radiopharmaceuticals; Technetium Tc 99m Sestamibi; Humans; Single Photon Emission Computed Tomography Computed Tomography; Machine Learning; Tomography, Emission-Computed, Single-Photon/methods
5.
J Nucl Cardiol ; 29(5): 2235-2250, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34085168

ABSTRACT

BACKGROUND: Attenuation correction (AC) using CT transmission scanning enables accurate quantitative analysis of dedicated cardiac SPECT. However, AC is challenging for SPECT-only scanners. We developed a deep-learning-based approach to generate synthetic AC images from SPECT images without AC. METHODS: CT-free AC was implemented using our customized Dual Squeeze-and-Excitation Residual Dense Network (DuRDN). 172 anonymized clinical hybrid SPECT/CT stress/rest myocardial perfusion studies were used for training, validation, and testing. Additional body mass index (BMI), gender, and scatter-window information was encoded as channel-wise input to further improve network performance. RESULTS: Quantitative and qualitative analysis based on image voxels and the 17-segment polar map showed the potential of our approach to generate consistent SPECT AC images. Our customized DuRDN showed superior performance to conventional network designs such as U-Net. The averaged voxel-wise normalized mean square error (NMSE) between the AC images predicted by DuRDN and the ground-truth AC images was 2.01 ± 1.01%, as compared to 2.23 ± 1.20% by U-Net. CONCLUSIONS: Our customized DuRDN facilitates dedicated cardiac SPECT AC without CT scanning. DuRDN can efficiently incorporate additional patient information and may achieve better performance compared to a conventional U-Net.
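
One simple way to encode scalar covariates such as BMI and gender as channel-wise network input, as described above, is to broadcast each scalar into a constant-valued image channel; a hedged sketch with placeholder shapes (not the paper's exact scheme):

```python
import torch

# Hedged sketch of conditioning a convolutional network on non-image covariates
# by appending constant-valued channels to the SPECT volume.
def add_scalar_channels(volume, scalars):
    # volume: (B, 1, D, H, W); scalars: (B, K), assumed already normalized
    b, _, d, h, w = volume.shape
    scalar_maps = scalars.view(b, -1, 1, 1, 1).expand(b, scalars.shape[1], d, h, w)
    return torch.cat([volume, scalar_maps], dim=1)      # -> (B, 1+K, D, H, W)

spect = torch.randn(2, 1, 16, 32, 32)
covariates = torch.tensor([[0.6, 1.0], [0.3, 0.0]])     # e.g., scaled BMI, gender flag
print(add_scalar_channels(spect, covariates).shape)     # torch.Size([2, 3, 16, 32, 32])
```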


Subjects
Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Single Photon Emission Computed Tomography Computed Tomography; Tomography, Emission-Computed, Single-Photon/methods
6.
Opt Express ; 29(20): 31754-31766, 2021 Sep 27.
Article in English | MEDLINE | ID: mdl-34615262

ABSTRACT

We demonstrate adaptive super-resolution-based contact imaging on a CMOS chip to achieve subcellular spatial resolution over a large field of view of ∼24 mm². Using regular LED illumination, we acquire a single lower-resolution image of the objects placed in close proximity to the sensor at unit magnification. For the raw contact-mode lens-free image, the pixel size of the sensor chip limits the spatial resolution. We develop a hybrid supervised-unsupervised strategy to train a super-resolution network, circumventing the lack of in-situ ground truth and effectively recovering a much higher-resolution image of the objects, permitting sub-micron spatial resolution to be achieved across the entire active area of the sensor chip. We demonstrate the success of this approach by imaging the proliferation dynamics of cells cultured directly on the chip.


Subjects
Human Umbilical Vein Endothelial Cells; Image Enhancement/methods; Intracellular Space/diagnostic imaging; Lighting/methods; Microscopy/methods; Algorithms; Cell Culture Techniques; Cell Proliferation; Humans; Image Enhancement/instrumentation; Lenses; Microscopy/instrumentation; Neural Networks, Computer
7.
IEEE Trans Med Imaging; PP, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38578853

ABSTRACT

Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps (µ-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed that focus solely on LD denoising, LV reconstruction, or CT-free AC in SPECT, a solution that simultaneously addresses these tasks remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free µ-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived µ-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating µ-maps, and AC reconstructions compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask.
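
A hedged sketch of the coarse-to-fine cascading idea (not the authors' DuDoCFNet): a second stage refines the first stage's estimate given both the input and that estimate; layer sizes and tensor shapes are placeholders.

```python
import torch
import torch.nn as nn

# Minimal two-stage coarse-to-fine cascade in the spirit of progressive learning:
# stage 1 produces a coarse estimate, stage 2 refines it using the original input
# concatenated with the coarse output. Illustrative only.
class Stage(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class CoarseToFine(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = Stage(1, 1)          # coarse estimate from the degraded input
        self.fine = Stage(2, 1)            # refinement from input + coarse estimate
    def forward(self, x):
        coarse = self.coarse(x)
        fine = self.fine(torch.cat([x, coarse], dim=1))
        return coarse, fine

model = CoarseToFine()
dummy = torch.randn(1, 1, 16, 32, 32)      # (batch, channel, depth, height, width)
coarse, fine = model(dummy)
print(coarse.shape, fine.shape)
```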

8.
Med Image Anal ; 96: 103190, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38820677

ABSTRACT

Inter-frame motion in dynamic cardiac positron emission tomography (PET) using rubidium-82 (82Rb) myocardial perfusion imaging impacts myocardial blood flow (MBF) quantification and the diagnostic accuracy of coronary artery disease. However, the high cross-frame distribution variation due to rapid tracer kinetics poses a considerable challenge for inter-frame motion correction, especially for early frames where intensity-based image registration techniques often fail. To address this issue, we propose a novel method called Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) that utilizes an all-to-one mapping to convert early frames into those with a tracer distribution similar to the last reference frame. The TAI-GAN consists of a feature-wise linear modulation layer that encodes channel-wise parameters generated from temporal information, and rough cardiac segmentation masks with local shifts that serve as anatomical information. Our proposed method was evaluated on a clinical 82Rb PET dataset, and the results show that TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and subsequent MBF quantification with both conventional and deep learning-based motion correction methods were improved compared to using the original frames. The code is available at https://github.com/gxq1998/TAI-GAN.
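
A minimal sketch of a feature-wise linear modulation (FiLM) layer of the kind mentioned above, where channel-wise scale and shift parameters are generated from a conditioning vector (e.g., encoded temporal information); names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Feature-wise linear modulation: gamma * feat + beta, with gamma and beta
# predicted per channel from a conditioning vector.
class FiLM(nn.Module):
    def __init__(self, cond_dim, num_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)
    def forward(self, feat, cond):
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        gamma = gamma.view(*gamma.shape, 1, 1, 1)   # broadcast over spatial dims
        beta = beta.view(*beta.shape, 1, 1, 1)
        return gamma * feat + beta

film = FiLM(cond_dim=4, num_channels=8)
feat = torch.randn(2, 8, 16, 16, 16)   # (batch, channels, D, H, W)
cond = torch.randn(2, 4)               # e.g., encoded frame-time information
print(film(feat, cond).shape)          # torch.Size([2, 8, 16, 16, 16])
```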


Subjects
Myocardial Perfusion Imaging; Positron-Emission Tomography; Rubidium Radioisotopes; Humans; Positron-Emission Tomography/methods; Myocardial Perfusion Imaging/methods; Coronary Artery Disease/diagnostic imaging; Image Processing, Computer-Assisted/methods
9.
Med Image Anal ; 95: 103180, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38657423

ABSTRACT

The high noise level of dynamic Positron Emission Tomography (PET) images degrades the quality of parametric images. In this study, we aim to improve the quality and quantitative accuracy of Ki images by utilizing deep learning techniques to reduce the noise in dynamic PET images. We propose a novel denoising technique, Population-based Deep Image Prior (PDIP), which integrates population-based prior information into the optimization process of Deep Image Prior (DIP). Specifically, the population-based prior image is generated from a supervised denoising model trained on a prompts-matched static PET dataset comprising 100 clinical studies. The 3D U-Net architecture is employed for both the supervised model and the subsequent DIP optimization process. We evaluated the efficacy of PDIP for noise reduction in 25%-count and 100%-count dynamic PET images from 23 patients by comparing it with two other baseline techniques: the Prompts-matched Supervised model (PS) and a conditional DIP (CDIP) model that employs the mean static PET image as the prior. Both the PS and CDIP models show effective noise reduction but result in smoothing and removal of small lesions. In addition, the utilization of a single static image as the prior in the CDIP model also introduces a similar tracer distribution to the denoised dynamic frames, leading to lower Ki in general as well as incorrect Ki in the descending aorta. By contrast, because the proposed PDIP model utilizes intrinsic image features from the dynamic dataset and a large clinical static dataset, it not only achieves noise reduction comparable to the supervised and CDIP models but also improves lesion Ki predictions.
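
A hedged sketch of a Deep-Image-Prior-style optimization loop in which the network input is a prior image rather than random noise, in the spirit of PDIP; the tiny network and random tensors stand in for the paper's 3D U-Net and clinical data.

```python
import torch
import torch.nn as nn

# DIP-style loop: fit the noisy target through a network whose input is a prior
# image; early stopping acts as implicit regularization. Placeholders throughout.
net = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 1, 3, padding=1))
prior = torch.randn(1, 1, 16, 32, 32)         # population-based prior image (placeholder)
noisy = torch.randn(1, 1, 16, 32, 32)         # noisy dynamic frame (placeholder)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(200):                          # iteration count is illustrative
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(prior), noisy)
    loss.backward()
    opt.step()
denoised = net(prior).detach()
```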


Subjects
Deep Learning; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods
10.
IEEE Trans Med Imaging; PP, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37368811

ABSTRACT

In whole-body dynamic positron emission tomography (PET), inter-frame subject motion causes spatial misalignment and affects parametric imaging. Many current deep learning inter-frame motion correction techniques focus solely on the anatomy-based registration problem, neglecting tracer kinetics, which contain functional information. To directly reduce the Patlak fitting error for 18F-FDG and further improve model performance, we propose an inter-frame motion correction framework with Patlak loss optimization integrated into the neural network (MCP-Net). The MCP-Net consists of a multiple-frame motion estimation block, an image-warping block, and an analytical Patlak block that estimates Patlak fitting using motion-corrected frames and the input function. A novel Patlak loss penalty component utilizing the mean squared percentage fitting error is added to the loss function to reinforce the motion correction. The parametric images were generated using standard Patlak analysis following motion correction. Our framework enhanced the spatial alignment in both dynamic frames and parametric images and lowered the normalized fitting error compared to both conventional and deep learning benchmarks. MCP-Net also achieved the lowest motion prediction error and showed the best generalization capability. These results suggest the potential of directly utilizing tracer kinetics to enhance network performance and improve the quantitative accuracy of dynamic PET.
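
For reference, Patlak graphical analysis fits y = Ct/Cp against x = ∫Cp dt / Cp, with slope Ki and intercept Vb; a self-contained sketch of that fit and of a mean squared percentage fitting error, using synthetic curves (placeholder data, not the paper's loss implementation):

```python
import numpy as np

# Patlak graphical analysis on synthetic time-activity curves, plus a mean
# squared percentage fitting error of the kind used as a penalty term.
def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def patlak_fit(Ct, Cp, t):
    x = cumtrapz(Cp, t) / Cp                      # "stretched time"
    y = Ct / Cp
    Ki, Vb = np.polyfit(x[1:], y[1:], 1)          # slope = Ki, intercept = Vb
    fit = Ki * x[1:] + Vb
    mspe = np.mean(((y[1:] - fit) / y[1:]) ** 2)  # mean squared percentage fitting error
    return Ki, Vb, mspe

t = np.linspace(0.0, 60.0, 20)                    # minutes (placeholder frame times)
Cp = 10.0 * np.exp(-0.05 * t) + 1.0               # synthetic input function
Ct = 0.02 * cumtrapz(Cp, t) + 0.3 * Cp            # tissue curve with Ki = 0.02, Vb = 0.3
print(patlak_fit(Ct, Cp, t))                      # ≈ (0.02, 0.3, ~0)
```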

11.
IEEE Trans Radiat Plasma Med Sci ; 7(3): 284-295, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37789946

ABSTRACT

Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may use different low-dose protocols, leading to non-identical data distributions. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, it is challenging for these methods to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET is still under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data without compromising data privacy, achieving superior low-dose PET denoising performance for different institutions with different low-dose settings compared to previous FL methods.
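
For context, the baseline federated aggregation step that FL frameworks such as this build on is a data-size-weighted average of locally trained weights (FedAvg-style); a toy sketch, not the paper's FTL method:

```python
import numpy as np

# FedAvg-style aggregation: the server averages locally trained model weights,
# weighted by each institution's dataset size; no raw patient data is shared.
def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in keys}

# Toy example with two "institutions" holding a single-layer model.
w_a = {"conv.weight": np.ones((3, 3)), "conv.bias": np.zeros(3)}
w_b = {"conv.weight": 3 * np.ones((3, 3)), "conv.bias": np.ones(3)}
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
print(global_w["conv.weight"][0, 0])  # 2.5 = 0.25*1 + 0.75*3
```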

12.
IEEE Trans Radiat Plasma Med Sci ; 7(5): 465-472, 2023 May.
Article in English | MEDLINE | ID: mdl-37997577

ABSTRACT

FDG parametric Ki images show great advantages over static SUV images, due to the higher contrast and better accuracy in tracer uptake rate estimation. In this study, we explored the feasibility of generating synthetic Ki images from static SUV ratio (SUVR) images using three configurations of U-Nets with different sets of input and output image patches: U-Nets with single input and single output (SISO), multiple inputs and single output (MISO), and single input and multiple outputs (SIMO). SUVR images were generated by averaging three 5-min dynamic SUV frames starting at 60 minutes post-injection and then normalizing by the mean SUV value in the blood pool. The corresponding ground-truth Ki images were derived using Patlak graphical analysis with input functions from measurements of arterial blood samples. Even though the synthetic Ki values were not quantitatively accurate compared with the ground truth, the linear regression analysis of joint histograms in the voxels of body regions showed that the mean R2 values were higher between the U-Net predictions and the ground truth (0.596, 0.580, and 0.576 for SISO, MISO, and SIMO) than between SUVR and the ground-truth Ki (0.571). In terms of similarity metrics, the synthetic Ki images were closer to the ground-truth Ki images (mean SSIM = 0.729, 0.704, and 0.704 for SISO, MISO, and SIMO) than the input SUVR images (mean SSIM = 0.691). Therefore, it is feasible to use deep learning networks to estimate surrogate maps of parametric Ki images from static SUVR images.
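
A minimal sketch of the SUVR input described above: average three late SUV frames and normalize by the mean SUV in a blood-pool region; arrays and the ROI mask are synthetic placeholders.

```python
import numpy as np

# SUVR computation: average late-time SUV frames and divide by the mean SUV in a
# blood-pool reference region.
def compute_suvr(suv_frames, blood_pool_mask):
    mean_suv = np.mean(suv_frames, axis=0)              # average the late frames
    blood_pool_suv = mean_suv[blood_pool_mask].mean()   # reference value
    return mean_suv / blood_pool_suv

frames = np.random.rand(3, 64, 64, 64) + 0.5            # three 5-min SUV frames (placeholder)
mask = np.zeros((64, 64, 64), dtype=bool)
mask[30:34, 30:34, 30:34] = True                        # placeholder blood-pool ROI
suvr = compute_suvr(frames, mask)
print(suvr.shape, suvr[mask].mean())                    # ratio ≈ 1 inside the ROI
```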

13.
Med Image Anal ; 88: 102840, 2023 08.
Article in English | MEDLINE | ID: mdl-37216735

ABSTRACT

Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Attenuation maps (µ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, in clinical practice, SPECT and CT scans are acquired sequentially, potentially inducing misregistration between the two images and further producing AC artifacts. Conventional intensity-based registration methods show poor performance in the cross-modality registration of SPECT and CT-derived µ-maps, since the two imaging modalities might present totally different intensity patterns. Deep learning has shown great potential in medical image registration. However, existing deep learning strategies for medical image registration encode the input images by simply concatenating the feature maps of different convolutional layers, which might not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived µ-maps has not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived µ-maps. DuSFE is designed based on the co-attention mechanism of two cross-connected input data streams. The channel-wise or spatial features of SPECT and µ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion in different spatial dimensions. Our evaluation using clinical patient MPI studies demonstrated that the DuSFE-embedded neural network generated significantly lower registration errors and more accurate AC SPECT images than existing methods. We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.
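
For orientation, the squeeze-and-excitation (channel-attention) block below is the single-stream building block that the dual-channel co-attention module above generalizes to two cross-connected streams; it is illustrative only, not DuSFE itself.

```python
import torch
import torch.nn as nn

# Squeeze-and-excitation: global-average-pool the feature map, predict per-channel
# weights with a small MLP, and recalibrate the channels.
class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                       # x: (B, C, D, H, W)
        squeezed = x.mean(dim=(2, 3, 4))        # global average pooling -> (B, C)
        weights = self.fc(squeezed).view(x.size(0), x.size(1), 1, 1, 1)
        return x * weights                      # recalibrate channels

se = SEBlock(channels=8)
print(se(torch.randn(2, 8, 8, 16, 16)).shape)   # torch.Size([2, 8, 8, 16, 16])
```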


Subjects
Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Tomography, Emission-Computed, Single-Photon/methods; Heart; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
14.
IEEE Trans Radiat Plasma Med Sci ; 7(8): 839-850, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38745858

ABSTRACT

SPECT systems distinguish radionuclides by using multiple energy windows. For CZT detectors, the energy spectrum has a low-energy tail leading to additional crosstalk between the radionuclides. Previous work developed models to correct the scatter and crosstalk for CZT-based dedicated cardiac systems with similar 99mTc/123I tracer distributions. These models estimate the primary and scatter components by solving a set of equations employing the MLEM approach. A penalty term is applied to ensure convergence. The present work estimates the penalty term for any 99mTc/123I activity level. An iterative approach incorporating Monte Carlo simulation into the image reconstruction loop was developed to estimate the penalty terms. We used SIMIND and XCAT phantoms in this study. The distribution of tracers in the myocardial tissue and blood pool was varied to simulate a dynamic acquisition. Evaluations of the estimated and the real penalty terms were performed using simulations and large-animal data. The myocardium-to-blood-pool ratio was calculated using ROIs in the myocardial tissue and the blood pool for quantitative analysis. All corrected images yielded good agreement with the gold-standard images. In conclusion, we developed a CZT crosstalk correction method for quantitative imaging of 99mTc/123I activity levels by dynamically estimating the penalty terms.
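
A toy sketch of the MLEM update underlying the estimation framework referenced above, shown for a small linear system; the penalty-term estimation and Monte Carlo scatter/crosstalk modeling of the paper are not reproduced.

```python
import numpy as np

# MLEM for a toy linear system y = A @ x: multiplicative updates that preserve
# non-negativity and converge toward the maximum-likelihood solution for
# consistent, noise-free data. Purely illustrative.
def mlem(A, y, n_iter=200):
    x = np.ones(A.shape[1])                  # positive initial estimate
    sens = A.T @ np.ones_like(y)             # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.abs(np.random.rand(40, 10))           # toy system matrix
x_true = np.abs(np.random.rand(10)) + 0.1
y = A @ x_true                               # noise-free projections
print(np.round(mlem(A, y), 3), np.round(x_true, 3))
```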

15.
IEEE Trans Med Imaging ; 42(5): 1325-1336, 2023 05.
Article in English | MEDLINE | ID: mdl-36459599

ABSTRACT

In nuclear imaging, limited resolution causes partial volume effects (PVEs) that affect image sharpness and quantitative accuracy. Partial volume correction (PVC) methods incorporating high-resolution anatomical information from CT or MRI have been demonstrated to be effective. However, such anatomically guided methods typically require tedious image registration and segmentation steps. Accurately segmented organ templates are also hard to obtain, particularly in cardiac SPECT imaging, due to the lack of hybrid SPECT/CT scanners with high-end CT and associated motion artifacts. Slight mis-registration or mis-segmentation would result in severe degradation of image quality after PVC. In this work, we develop a deep-learning-based method for fast cardiac SPECT PVC without anatomical information and associated organ segmentation. The proposed network involves a densely connected multi-dimensional dynamic mechanism, allowing the convolutional kernels to be adapted based on the input images, even after the network is fully trained. Intramyocardial blood volume (IMBV) is introduced as an additional clinically relevant loss function for network optimization. The proposed network demonstrated promising performance on 28 canine studies acquired on a GE Discovery NM/CT 570c dedicated cardiac SPECT scanner with a 64-slice CT using Technetium-99m-labeled red blood cells. This work showed that the proposed network with the densely connected dynamic mechanism produced superior results compared with the same network without such a mechanism. Results also showed that the proposed network without anatomical information could produce images with statistically comparable IMBV measurements to the images generated by anatomically guided PVC methods, which could be helpful in clinical translation.


Subjects
Algorithms; Tomography, Emission-Computed, Single-Photon; Animals; Dogs; Artifacts; Cardiac Imaging Techniques; Erythrocytes
16.
Med Phys ; 50(1): 89-103, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36048541

ABSTRACT

PURPOSE: Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, the long scanning procedures and acquisition time might induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles provides a solution that results in a shorter scanning time. However, fewer projection angles might cause lower reconstruction accuracy, a higher noise level, and reconstruction artifacts due to reduced angular sampling. We developed a deep-learning-based approach for high-quality SPECT image reconstruction using sparsely sampled projections. METHODS: We proposed a novel deep-learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS utilized the SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections were then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluations in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep-learning-based approaches, including direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested in this study for comparison. The dataset used in this study included a total of 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following the injection of 99mTc-tetrofosmin. RESULTS: Our proposed DuDoSS generated synthetic projections and SPECT images more consistent with the ground truth than other approaches. The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, as compared to 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The averaged voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 1.63% ± 0.72%, as compared to 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The averaged segment-wise APE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 3.87% ± 3.23%, as compared to 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS: It is feasible for our proposed DuDoSS to generate accurate synthetic full-view projections from sparsely sampled projections for cardiac SPECT. The synthetic projections and reconstructed SPECT images generated by DuDoSS are more consistent with the ground-truth full-view projections and SPECT images than those from other approaches. DuDoSS can potentially enable fast data acquisition of cardiac SPECT.
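
A minimal sketch of sparse angular sampling, i.e., keeping one of every k projection angles from a full 60-angle acquisition as in the few-view setting above; the network-based recovery of the dropped views is not reproduced.

```python
import numpy as np

# Keep 1 of every k projection angles to simulate a sparsely sampled acquisition.
def subsample_projections(projections, keep_every=3):
    idx = np.arange(0, projections.shape[0], keep_every)
    return projections[idx], idx

full = np.random.rand(60, 64, 64)            # (angles, rows, cols), placeholder data
sparse, kept = subsample_projections(full, keep_every=3)
print(sparse.shape, kept[:5])                # (20, 64, 64) [ 0  3  6  9 12]
```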


Subjects
Deep Learning; Hominidae; Humans; Animals; Tomography, Emission-Computed, Single-Photon/methods; Heart/diagnostic imaging; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods
17.
Med Image Anal ; 90: 102993, 2023 12.
Article in English | MEDLINE | ID: mdl-37827110

ABSTRACT

Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from a low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns over patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.


Subjects
Algorithms; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted; Signal-To-Noise Ratio
18.
Simul Synth Med Imaging ; 14288: 64-74, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38464964

ABSTRACT

The rapid tracer kinetics of rubidium-82 (82Rb) and high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes to assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical 82Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.

19.
Med Image Comput Comput Assist Interv ; 13434: 163-172, 2022 Sep.
Article in English | MEDLINE | ID: mdl-38464686

ABSTRACT

Inter-frame patient motion introduces spatial misalignment and degrades parametric imaging in whole-body dynamic positron emission tomography (PET). Most current deep learning inter-frame motion correction works consider only the image registration problem, ignoring tracer kinetics. We propose an inter-frame Motion Correction framework with Patlak regularization (MCP-Net) to directly optimize the Patlak fitting error and further improve model performance. The MCP-Net contains three modules: a motion estimation module consisting of a multiple-frame 3-D U-Net with a convolutional long short-term memory layer combined at the bottleneck; an image warping module that performs spatial transformation; and an analytical Patlak module that estimates Patlak fitting with the motion-corrected frames and the individual input function. A Patlak loss penalization term using mean squared percentage fitting error is introduced to the loss function in addition to image similarity measurement and displacement gradient loss. Following motion correction, the parametric images were generated by standard Patlak analysis. Compared with both traditional and deep learning benchmarks, our network further corrected the residual spatial mismatch in the dynamic frames, improved the spatial alignment of Patlak Ki/Vb images, and reduced normalized fitting error. With the utilization of tracer dynamics and enhanced network performance, MCP-Net has the potential for further improving the quantitative accuracy of dynamic PET. Our code is released at https://github.com/gxq1998/MCP-Net.

20.
Med Image Anal ; 75: 102289, 2022 01.
Article in English | MEDLINE | ID: mdl-34758443

ABSTRACT

Sparse-view computed tomography (SVCT) aims to reconstruct a cross-sectional image using a reduced number of x-ray projections. While SVCT can efficiently reduce the radiation dose, the reconstruction suffers from severe streak artifacts, and the artifacts are further amplified in the presence of metallic implants, which could adversely impact medical diagnosis and other downstream applications. Previous methods have extensively explored either SVCT reconstruction without metallic implants, or full-view CT metal artifact reduction (MAR). The issue of simultaneous sparse-view and metal artifact reduction (SVMAR) remains under-explored, and it is infeasible to directly apply previous SVCT and MAR methods to SVMAR, as this may yield non-ideal reconstruction quality. In this work, we propose a dual-domain data consistent recurrent network, called DuDoDR-Net, for SVMAR. Our DuDoDR-Net aims to reconstruct an artifact-free image through recurrent image-domain and sinogram-domain restorations. To ensure that the metal-free part of the acquired projection data is preserved, we also develop an image data consistent layer (iDCL) and a sinogram data consistent layer (sDCL) that are interleaved in our recurrent framework. Our experimental results demonstrate that our DuDoDR-Net is able to produce superior artifact-reduced results while preserving the anatomical structures, outperforming previous SVCT and SVMAR methods under different sparse-view acquisition settings.
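
A hedged sketch of a sinogram data-consistency step in the spirit of the sDCL described above: acquired measurements are kept in metal-free bins and only the metal trace is replaced with the network estimate; arrays and the trace are synthetic placeholders.

```python
import numpy as np

# Sinogram data consistency: keep measured data where it is trustworthy (metal-free
# bins) and substitute the network prediction only inside the metal-corrupted trace.
def sinogram_data_consistency(sino_measured, sino_predicted, metal_trace):
    return np.where(metal_trace, sino_predicted, sino_measured)

measured = np.random.rand(180, 256)               # acquired sinogram (placeholder)
predicted = np.random.rand(180, 256)              # network-restored sinogram (placeholder)
trace = np.zeros((180, 256), dtype=bool)
trace[:, 100:120] = True                          # placeholder metal trace
consistent = sinogram_data_consistency(measured, predicted, trace)
print(np.allclose(consistent[:, :100], measured[:, :100]))  # True
```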


Subjects
Artifacts; Image Processing, Computer-Assisted; Algorithms; Humans; Phantoms, Imaging; Tomography, X-Ray Computed