Results 1 - 20 of 21
1.
Med Image Anal ; 95: 103180, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38657423

ABSTRACT

The high noise level of dynamic Positron Emission Tomography (PET) images degrades the quality of parametric images. In this study, we aim to improve the quality and quantitative accuracy of Ki images by utilizing deep learning techniques to reduce the noise in dynamic PET images. We propose a novel denoising technique, Population-based Deep Image Prior (PDIP), which integrates population-based prior information into the optimization process of Deep Image Prior (DIP). Specifically, the population-based prior image is generated from a supervised denoising model trained on a prompts-matched static PET dataset comprising 100 clinical studies. The 3D U-Net architecture is employed for both the supervised model and the subsequent DIP optimization process. We evaluated the efficacy of PDIP for noise reduction in 25%-count and 100%-count dynamic PET images from 23 patients by comparing it with two baseline techniques: the Prompts-matched Supervised model (PS) and a conditional DIP (CDIP) model that employs the mean static PET image as the prior. Both the PS and CDIP models achieve effective noise reduction but smooth out and remove small lesions. In addition, using a single static image as the prior in the CDIP model also imposes a similar tracer distribution on the denoised dynamic frames, leading to lower Ki in general as well as incorrect Ki in the descending aorta. By contrast, because the proposed PDIP model utilizes intrinsic image features from the dynamic dataset and a large clinical static dataset, it not only achieves noise reduction comparable to that of the supervised and CDIP models but also improves lesion Ki predictions.

2.
IEEE Trans Med Imaging ; PP: 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38578853

ABSTRACT

Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps (µ-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed that focus solely on LD denoising, LV reconstruction, or CT-free AC in SPECT, simultaneously addressing these tasks remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free µ-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived µ-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating µ-maps, and AC reconstructions compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask.

3.
IEEE Trans Radiat Plasma Med Sci ; 7(5): 465-472, 2023 May.
Article in English | MEDLINE | ID: mdl-37997577

ABSTRACT

FDG parametric Ki images show a clear advantage over static SUV images due to their higher contrast and better accuracy in tracer uptake rate estimation. In this study, we explored the feasibility of generating synthetic Ki images from static SUV ratio (SUVR) images using three configurations of U-Nets with different sets of input and output image patches: U-Nets with single input and single output (SISO), multiple inputs and single output (MISO), and single input and multiple outputs (SIMO). SUVR images were generated by averaging three 5-min dynamic SUV frames starting at 60 minutes post-injection, and then normalized by the mean SUV value in the blood pool. The corresponding ground-truth Ki images were derived using Patlak graphical analysis with input functions from measurement of arterial blood samples. Even though the synthetic Ki values were not quantitatively accurate compared with the ground truth, linear regression analysis of joint histograms in the voxels of body regions showed that the mean R2 values between U-Net predictions and the ground truth (0.596, 0.580, and 0.576 for SISO, MISO, and SIMO) were higher than that between SUVR and ground-truth Ki (0.571). In terms of similarity metrics, the synthetic Ki images were closer to the ground-truth Ki images (mean SSIM = 0.729, 0.704, and 0.704 for SISO, MISO, and SIMO) than the input SUVR images (mean SSIM = 0.691). Therefore, it is feasible to use deep learning networks to estimate surrogate maps of parametric Ki images from static SUVR images.
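The Patlak graphical analysis used above fits a line to transformed time-activity data, with Ki as the slope. A minimal sketch of that fit on synthetic data (the constant input function and all numbers are illustrative assumptions, not values from this study):

```python
import numpy as np

def patlak_ki(t, tissue, plasma):
    """Estimate Ki via Patlak graphical analysis.

    The Patlak plot uses x = (integral of plasma input up to t) / plasma(t)
    and y = tissue(t) / plasma(t); Ki is the slope of the late linear phase.
    """
    # cumulative integral of the plasma input function (trapezoidal rule)
    int_cp = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (plasma[1:] + plasma[:-1]) / 2.0)))
    x = int_cp / plasma
    y = tissue / plasma
    # fit a line over the late frames, where the plot is approximately linear
    ki, intercept = np.polyfit(x[len(x) // 2:], y[len(y) // 2:], 1)
    return ki, intercept

# hypothetical noiseless example: constant plasma input, true Ki = 0.05 /min
t = np.linspace(1.0, 60.0, 30)               # frame mid-times (min)
plasma = np.full_like(t, 10.0)               # idealized constant input (kBq/mL)
ki_true, v0 = 0.05, 0.2
tissue = ki_true * plasma * t + v0 * plasma  # irreversible-uptake tissue curve
ki_est, intercept = patlak_ki(t, tissue, plasma)
```

With noiseless linear data, the fitted slope recovers the true Ki; in practice only the late frames are used because the plot becomes linear once the reversible compartments equilibrate.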

4.
IEEE Trans Radiat Plasma Med Sci ; 7(3): 284-295, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37789946

ABSTRACT

Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may use different low-dose protocols, leading to non-identical data distributions. While federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, previous methods struggle to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET is still under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that, compared to previous FL methods, our method can efficiently utilize heterogeneous low-dose data without compromising data privacy, achieving superior low-dose PET denoising performance for institutions with different low-dose settings.
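Federated learning as described here trains local models and shares only their weights. A minimal FedAvg-style aggregation sketch (this illustrates the generic FL baseline, not the paper's FTL framework; the layer names and dataset sizes are hypothetical):

```python
import numpy as np

def fed_avg(local_weights, num_samples):
    """FedAvg aggregation: average each layer's weights across institutions,
    weighted by the size of each institution's local dataset."""
    total = float(sum(num_samples))
    return {
        name: sum(w[name] * (n / total)
                  for w, n in zip(local_weights, num_samples))
        for name in local_weights[0]
    }

# three hypothetical institutions with different local dataset sizes
w1 = {"conv1": np.full((2, 2), 1.0)}
w2 = {"conv1": np.full((2, 2), 2.0)}
w3 = {"conv1": np.full((2, 2), 4.0)}
global_w = fed_avg([w1, w2, w3], [100, 100, 200])
# weighted mean: (1*100 + 2*100 + 4*200) / 400 = 2.75
```

Only these aggregated weights travel between sites; raw patient images never leave the local institution, which is the privacy property the abstract relies on.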

5.
Med Image Anal ; 90: 102993, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37827110

ABSTRACT

Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from a low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns around patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.


Subject(s)
Algorithms; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted; Signal-To-Noise Ratio
6.
IEEE Trans Med Imaging ; PP: 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37368811

ABSTRACT

In whole-body dynamic positron emission tomography (PET), inter-frame subject motion causes spatial misalignment and affects parametric imaging. Many current deep learning inter-frame motion correction techniques focus solely on the anatomy-based registration problem, neglecting the tracer kinetics that contain functional information. To directly reduce the Patlak fitting error for 18F-FDG and further improve model performance, we propose an inter-frame motion correction framework with Patlak loss optimization integrated into the neural network (MCP-Net). The MCP-Net consists of a multiple-frame motion estimation block, an image-warping block, and an analytical Patlak block that estimates Patlak fitting using motion-corrected frames and the input function. A novel Patlak loss penalty component utilizing the mean squared percentage fitting error is added to the loss function to reinforce the motion correction. The parametric images were generated using standard Patlak analysis following motion correction. Our framework enhanced the spatial alignment in both dynamic frames and parametric images and lowered the normalized fitting error when compared to both conventional and deep learning benchmarks. MCP-Net also achieved the lowest motion prediction error and showed the best generalization capability. These results suggest the potential of directly utilizing tracer kinetics to enhance network performance and improve the quantitative accuracy of dynamic PET.

7.
Med Image Anal ; 88: 102840, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37216735

ABSTRACT

Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Attenuation maps (µ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, in clinical practice, SPECT and CT scans are acquired sequentially, potentially inducing misregistration between the two images and further producing AC artifacts. Conventional intensity-based registration methods show poor performance in the cross-modality registration of SPECT and CT-derived µ-maps since the two imaging modalities might present totally different intensity patterns. Deep learning has shown great potential in medical image registration. However, existing deep learning strategies for medical image registration encode the input images by simply concatenating the feature maps of different convolutional layers, which might not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived µ-maps has not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived µ-maps. DuSFE is designed based on the co-attention mechanism of two cross-connected input data streams. The channel-wise and spatial features of SPECT and µ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion in different spatial dimensions. Our evaluation using clinical MPI studies demonstrated that the DuSFE-embedded neural network generated significantly lower registration errors and more accurate AC SPECT images than existing methods.
We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.


Subject(s)
Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Tomography, Emission-Computed, Single-Photon/methods; Heart; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
8.
J Nucl Cardiol ; 30(5): 1859-1878, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35680755

ABSTRACT

Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (µ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic µ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating µ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.


Subject(s)
Deep Learning; Positron Emission Tomography Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Tomography, Emission-Computed, Single-Photon; Magnetic Resonance Imaging/methods
9.
IEEE Trans Med Imaging ; 42(5): 1325-1336, 2023 May.
Article in English | MEDLINE | ID: mdl-36459599

ABSTRACT

In nuclear imaging, limited resolution causes partial volume effects (PVEs) that affect image sharpness and quantitative accuracy. Partial volume correction (PVC) methods incorporating high-resolution anatomical information from CT or MRI have been demonstrated to be effective. However, such anatomy-guided methods typically require tedious image registration and segmentation steps. Accurately segmented organ templates are also hard to obtain, particularly in cardiac SPECT imaging, due to the lack of hybrid SPECT/CT scanners with high-end CT and the associated motion artifacts. Slight mis-registration/mis-segmentation would result in severe degradation in image quality after PVC. In this work, we develop a deep-learning-based method for fast cardiac SPECT PVC without anatomical information and associated organ segmentation. The proposed network involves a densely-connected multi-dimensional dynamic mechanism, allowing the convolutional kernels to be adapted based on the input images, even after the network is fully trained. Intramyocardial blood volume (IMBV) is introduced as an additional clinically relevant loss function for network optimization. The proposed network demonstrated promising performance on 28 canine studies acquired on a GE Discovery NM/CT 570c dedicated cardiac SPECT scanner with a 64-slice CT using Technetium-99m-labeled red blood cells. This work showed that the proposed network with the densely-connected dynamic mechanism produced superior results compared with the same network without such a mechanism. Results also showed that the proposed network without anatomical information could produce images with statistically comparable IMBV measurements to those generated by anatomy-guided PVC methods, which could be helpful in clinical translation.


Subject(s)
Algorithms; Tomography, Emission-Computed, Single-Photon; Animals; Dogs; Artifacts; Cardiac Imaging Techniques; Erythrocytes
10.
J Nucl Cardiol ; 30(1): 86-100, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35508796

ABSTRACT

BACKGROUND: The GE Discovery NM (DNM) 530c/570c are dedicated cardiac SPECT scanners with 19 detector modules designed for stationary imaging. This study aims to incorporate additional projection angular sampling to improve reconstruction quality. A deep learning method is also proposed to generate synthetic dense-view image volumes from few-view counterparts. METHODS: By moving the detector array, a total of four projection angle sets were acquired and combined for image reconstructions. A deep neural network is proposed to generate synthetic four-angle images with 76 (4 × 19) projections from corresponding one-angle images with 19 projections. Simulated, pig, physical phantom, and human studies were used for network training and evaluation. Reconstruction results were quantitatively evaluated using representative image metrics. The myocardial perfusion defect size of different subjects was quantified using FDA-cleared clinical software. RESULTS: Multi-angle reconstructions and network results showed higher image resolution, improved uniformity on normal myocardium, more accurate defect quantification, and superior quantitative values on all the testing data. As validated against cardiac catheterization and diagnostic results, deep learning results showed improved image quality with better defect contrast on human studies. CONCLUSION: Increasing angular sampling can substantially improve image quality on the DNM, and deep learning can be implemented to improve reconstruction quality in the case of stationary imaging.


Subject(s)
Deep Learning; Humans; Animals; Swine; Tomography, Emission-Computed, Single-Photon/methods; Tomography, X-Ray Computed/methods; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
11.
Med Phys ; 50(1): 89-103, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36048541

ABSTRACT

PURPOSE: Myocardial perfusion imaging (MPI) using single-photon emission-computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, the long scanning procedures and acquisition time might induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles provides a solution that results in a shorter scanning time. However, fewer projection angles might cause lower reconstruction accuracy, a higher noise level, and reconstruction artifacts due to reduced angular sampling. We developed a deep-learning-based approach for high-quality SPECT image reconstruction using sparsely sampled projections. METHODS: We proposed a novel deep-learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS utilized the SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections were then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluations in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep-learning-based approaches, including direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested in this study for comparison. The dataset used in this study included a total of 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following the injection of 99mTc-tetrofosmin. RESULTS: Our proposed DuDoSS generated synthetic projections and SPECT images that were more consistent with the ground truth than those of the other approaches.
The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, as compared to 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The average voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 1.63% ± 0.72%, as compared to 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The average segment-wise APE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 3.87% ± 3.23%, as compared to 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS: It is feasible for our proposed DuDoSS to generate accurate synthetic full-view projections from sparsely sampled projections for cardiac SPECT. The synthetic projections and reconstructed SPECT images generated by DuDoSS are more consistent with the ground-truth full-view projections and SPECT images than those of the other approaches. DuDoSS can potentially enable fast data acquisition of cardiac SPECT.
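The NMSE and APE figures quoted above are standard error metrics; one widely used formulation is sketched below (normalization conventions vary between papers, so this is an illustrative definition, not necessarily the exact one used here, and the arrays are made-up data):

```python
import numpy as np

def nmse(pred, truth):
    """Normalized mean square error (%): squared error over signal energy."""
    return 100.0 * np.sum((pred - truth) ** 2) / np.sum(truth ** 2)

def ape(pred, truth):
    """Absolute percent error (%) of the mean value, e.g. per polar-map segment."""
    return 100.0 * abs(pred.mean() - truth.mean()) / truth.mean()

truth = np.full((4, 4), 10.0)   # hypothetical ground-truth segment
pred = truth + 1.0              # prediction with a uniform +1 bias
# nmse = 100 * 16 / 1600 = 1.0 ; ape = 100 * 1 / 10 = 10.0
```

NMSE is sensitive to voxel-wise differences everywhere, while APE on segment means reflects the regional bias that matters for perfusion quantification.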


Subject(s)
Deep Learning; Hominidae; Humans; Animals; Tomography, Emission-Computed, Single-Photon/methods; Heart/diagnostic imaging; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods
12.
IEEE Trans Radiat Plasma Med Sci ; 7(8): 839-850, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38745858

ABSTRACT

SPECT systems distinguish radionuclides by using multiple energy windows. For CZT detectors, the energy spectrum has a low-energy tail, leading to additional crosstalk between the radionuclides. Previous work developed models to correct the scatter and crosstalk for CZT-based dedicated cardiac systems with similar 99mTc/123I tracer distributions. These models estimate the primary and scatter components by solving a set of equations employing the MLEM approach. A penalty term is applied to ensure convergence. The present work estimates the penalty term for any 99mTc/123I activity level. An iterative approach incorporating Monte Carlo simulation into the iterative image reconstruction loops was developed to estimate the penalty terms. We used SIMIND and XCAT phantoms in this study. Distributions of tracers in the myocardial tissue and blood pool were varied to simulate a dynamic acquisition. Evaluations of the estimated and the real penalty terms were performed using simulations and large animal data. The myocardium-to-blood-pool ratio was calculated using ROIs in the myocardial tissue and the blood pool for quantitative analysis. All corrected images showed good agreement with the gold standard images. In conclusion, we developed a CZT crosstalk correction method for quantitative imaging across 99mTc/123I activity levels by dynamically estimating the penalty terms.
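The MLEM approach referenced above recovers activity x from measured counts y ≈ Ax by multiplicative updates. A minimal sketch on a toy 2-pixel system (the system matrix and counts are made-up illustrations, with noiseless data and without the paper's penalty term):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Basic MLEM reconstruction for y ~ Poisson(A @ x), using the
    multiplicative update x <- x / (A^T 1) * A^T (y / (A @ x))."""
    x = np.ones(A.shape[1])             # uniform non-negative initialization
    sens = A.T @ np.ones(A.shape[0])    # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

# made-up 2-pixel, 2-bin system with noiseless data for a deterministic check
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
x_true = np.array([4.0, 2.0])
x_est = mlem(A, A @ x_true)
```

With noiseless data and a well-conditioned system the iterates converge to the true activity; with real Poisson data, regularization such as the penalty term described above is needed to stabilize the solution.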

13.
Simul Synth Med Imaging ; 14288: 64-74, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38464964

ABSTRACT

The rapid tracer kinetics of rubidium-82 (82Rb) and high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes to assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical 82Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.
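The feature-wise linear modulation layer mentioned above applies a per-channel scale (γ) and shift (β), generated from conditioning information (here, tracer kinetics), to the feature maps. A minimal numpy sketch (shapes and values are illustrative assumptions, not the TAI-GAN implementation):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each feature channel
    with parameters predicted from conditioning information."""
    return gamma[:, None, None] * features + beta[:, None, None]

# hypothetical (channels, H, W) feature maps and per-channel parameters
feats = np.ones((3, 2, 2))
gamma = np.array([1.0, 2.0, 0.0])
beta = np.array([0.0, 1.0, 5.0])
out = film(feats, gamma, beta)
# channel 0 unchanged (1.0), channel 1 -> 2*1+1 = 3.0, channel 2 -> 5.0
```

Because γ and β are functions of the conditioning input, the same convolutional backbone can adapt its features to each frame's position in the tracer kinetic curve.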

14.
IEEE Trans Med Imaging ; 41(12): 3587-3599, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35816532

ABSTRACT

To reduce the potential risk of radiation to the patient, low-dose computed tomography (LDCT) has been widely adopted in clinical practice for reconstructing cross-sectional images using sinograms with reduced x-ray flux. LDCT image quality is often degraded by different levels of noise depending on the low-dose protocol. Image quality is further degraded when the patient has metallic implants, as the image then suffers from additional streak artifacts along with further amplified noise, affecting medical diagnosis and other CT-related applications. Previous studies mainly focused either on denoising LDCT without considering metallic implants or on full-dose CT metal artifact reduction (MAR). Directly applying previous LDCT or MAR approaches to the problem of simultaneous metal artifact reduction and low-dose CT (MARLD) may yield sub-optimal reconstruction results. In this work, we develop a dual-domain under-to-fully-complete progressive restoration network, called DuDoUFNet, for MARLD. Our DuDoUFNet aims to reconstruct images with substantially reduced noise and artifacts via progressive sinogram-to-image domain restoration with a two-stage progressive restoration network design. Our experimental results demonstrate that our method can provide high-quality reconstructions, superior to previous LDCT and MAR methods under various low-dose and metal settings.


Subject(s)
Algorithms; Artifacts; Humans; Tomography, X-Ray Computed/methods; Metals; Prostheses and Implants; Image Processing, Computer-Assisted/methods
15.
J Nucl Cardiol ; 29(6): 3379-3391, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35474443

ABSTRACT

It has been proven feasible to generate attenuation maps (µ-maps) from cardiac SPECT using deep learning. However, this assumed that the training and testing datasets were acquired using the same scanner, tracer, and protocol. We investigated the robust generation of CT-derived µ-maps from cardiac SPECT acquired with scanners, tracers, and protocols different from those of the training data. We first pre-trained a network using 120 studies injected with 99mTc-tetrofosmin acquired on a GE 850 SPECT/CT with 360-degree gantry rotation, which was then fine-tuned and tested using 80 studies injected with 99mTc-sestamibi acquired on a Philips BrightView SPECT/CT with 180-degree gantry rotation. The error between ground-truth and predicted µ-maps by transfer learning was 5.13 ± 7.02%, as compared to 8.24 ± 5.01% by direct transition without fine-tuning and 6.45 ± 5.75% by limited-sample training. The error between ground-truth and reconstructed images with predicted µ-maps by transfer learning was 1.11 ± 1.57%, as compared to 1.72 ± 1.63% by direct transition and 1.68 ± 1.21% by limited-sample training. With proper transfer learning, it is feasible to apply a network pre-trained on a large amount of data from one scanner to data acquired on another scanner using different tracers and protocols.


Subject(s)
Radiopharmaceuticals; Technetium Tc 99m Sestamibi; Humans; Single Photon Emission Computed Tomography Computed Tomography; Machine Learning; Tomography, Emission-Computed, Single-Photon/methods
16.
Eur J Nucl Med Mol Imaging ; 49(9): 3046-3060, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35169887

ABSTRACT

PURPOSE: Deep-learning-based attenuation correction (AC) for SPECT includes both indirect and direct approaches. Indirect approaches generate attenuation maps (µ-maps) from emission images, while direct approaches predict AC images directly from non-attenuation-corrected (NAC) images without µ-maps. For dedicated cardiac SPECT scanners with CZT detectors, indirect approaches are challenging due to the limited field-of-view (FOV). In this work, we aim to 1) develop novel indirect approaches to improve the AC performance for dedicated SPECT; and 2) compare the AC performance of direct and indirect approaches for both general-purpose and dedicated SPECT. METHODS: For dedicated SPECT, we developed strategies to predict truncated µ-maps from NAC images reconstructed with a small matrix, or full µ-maps from NAC images reconstructed with a large matrix, using 270 anonymized clinical studies scanned on a GE Discovery NM/CT 570c SPECT/CT. For general-purpose SPECT, we implemented direct and indirect approaches using 400 anonymized clinical studies scanned on a GE NM/CT 850c SPECT/CT. NAC images in both the photopeak and scatter windows were input to predict µ-maps or AC images. RESULTS: For dedicated SPECT, the average normalized mean square error (NMSE) using our proposed strategies with full µ-maps was 1.20 ± 0.72%, as compared to 2.21 ± 1.17% using the previous direct approaches. The polar map absolute percent error (APE) using our approaches was 3.24 ± 2.79% (R2 = 0.9499), as compared to 4.77 ± 3.96% (R2 = 0.9213) using direct approaches. For general-purpose SPECT, the average NMSE of the predicted AC images using the direct approaches was 2.57 ± 1.06%, as compared to 1.37 ± 1.16% using the indirect approaches. CONCLUSIONS: We developed strategies for generating µ-maps for dedicated cardiac SPECT with a small FOV. For both general-purpose and dedicated SPECT, indirect approaches showed superior AC performance to direct approaches.


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Tomography, Emission-Computed, Single-Photon/methods
17.
J Nucl Cardiol ; 29(5): 2235-2250, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34085168

ABSTRACT

BACKGROUND: Attenuation correction (AC) using CT transmission scanning enables the accurate quantitative analysis of dedicated cardiac SPECT. However, AC is challenging for SPECT-only scanners. We developed a deep learning-based approach to generate synthetic AC images from SPECT images without AC. METHODS: CT-free AC was implemented using our customized Dual Squeeze-and-Excitation Residual Dense Network (DuRDN). 172 anonymized clinical hybrid SPECT/CT stress/rest myocardial perfusion studies were used for training, validation, and testing. Additional body mass index (BMI), gender, and scatter-window information was encoded as channel-wise input to further improve network performance. RESULTS: Quantitative and qualitative analyses based on image voxels and the 17-segment polar map showed the potential of our approach to generate consistent SPECT AC images. Our customized DuRDN showed superior performance to conventional network designs such as U-Net. The average voxel-wise normalized mean square error (NMSE) between the predicted AC images by DuRDN and the ground-truth AC images was 2.01 ± 1.01%, as compared to 2.23 ± 1.20% by U-Net. CONCLUSIONS: Our customized DuRDN facilitates dedicated cardiac SPECT AC without CT scanning. DuRDN can efficiently incorporate additional patient information and may achieve better performance than a conventional U-Net.


Subject(s)
Tomography, Emission-Computed, Single-Photon , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Single Photon Emission Computed Tomography Computed Tomography , Tomography, Emission-Computed, Single-Photon/methods
18.
Med Image Anal ; 75: 102289, 2022 01.
Article in English | MEDLINE | ID: mdl-34758443

ABSTRACT

Sparse-view computed tomography (SVCT) aims to reconstruct a cross-sectional image using a reduced number of x-ray projections. While SVCT can efficiently reduce the radiation dose, the reconstruction suffers from severe streak artifacts, and the artifacts are further amplified in the presence of metallic implants, which can adversely impact medical diagnosis and other downstream applications. Previous methods have extensively explored either SVCT reconstruction without metallic implants, or full-view CT metal artifact reduction (MAR). The issue of simultaneous sparse-view and metal artifact reduction (SVMAR) remains under-explored, and directly applying previous SVCT and MAR methods to SVMAR may yield non-ideal reconstruction quality. In this work, we propose a dual-domain data consistent recurrent network, called DuDoDR-Net, for SVMAR. Our DuDoDR-Net aims to reconstruct an artifact-free image by recurrent image-domain and sinogram-domain restorations. To ensure that the metal-free part of the acquired projection data is preserved, we also develop an image data consistent layer (iDCL) and a sinogram data consistent layer (sDCL) that are interleaved in our recurrent framework. Our experimental results demonstrate that DuDoDR-Net produces superior artifact-reduced results while preserving the anatomical structures, outperforming previous SVCT and MAR methods under different sparse-view acquisition settings.
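The core idea behind the sinogram data consistent layer described above is to trust the network only where the measurements are unreliable (the metal trace) and restore the acquired projection data everywhere else. A minimal sketch of that masking step, with hypothetical array names (the published layer is differentiable and embedded in the recurrent network, which this sketch does not reproduce):

```python
import numpy as np

def sinogram_data_consistency(pred_sino, acquired_sino, metal_trace):
    """Keep the network prediction only inside the metal trace and
    restore the acquired (trusted) projection data elsewhere."""
    return np.where(metal_trace, pred_sino, acquired_sino)

acquired = np.arange(12.0).reshape(3, 4)   # toy 3-view, 4-bin sinogram
pred = np.zeros((3, 4))                    # toy network output
trace = np.zeros((3, 4), dtype=bool)
trace[1, 1:3] = True                       # bins corrupted by metal
out = sinogram_data_consistency(pred, acquired, trace)
print(out[0].tolist())  # [0.0, 1.0, 2.0, 3.0] -- acquired data preserved
```

The image data consistent layer applies the same principle after backprojecting to the image domain, which is what makes the two restorations consistent with the measured data at every recurrence.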


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Algorithms , Humans , Phantoms, Imaging , Tomography, X-Ray Computed
19.
Med Image Comput Comput Assist Interv ; 13434: 163-172, 2022 Sep.
Article in English | MEDLINE | ID: mdl-38464686

ABSTRACT

Inter-frame patient motion introduces spatial misalignment and degrades parametric imaging in whole-body dynamic positron emission tomography (PET). Most current deep learning inter-frame motion correction works consider only the image registration problem, ignoring tracer kinetics. We propose an inter-frame Motion Correction framework with Patlak regularization (MCP-Net) to directly optimize the Patlak fitting error and further improve model performance. The MCP-Net contains three modules: a motion estimation module consisting of a multiple-frame 3-D U-Net with a convolutional long short-term memory layer combined at the bottleneck; an image warping module that performs spatial transformation; and an analytical Patlak module that estimates Patlak fitting with the motion-corrected frames and the individual input function. A Patlak loss penalization term using mean squared percentage fitting error is introduced to the loss function in addition to image similarity measurement and displacement gradient loss. Following motion correction, the parametric images were generated by standard Patlak analysis. Compared with both traditional and deep learning benchmarks, our network further corrected the residual spatial mismatch in the dynamic frames, improved the spatial alignment of Patlak Ki/Vb images, and reduced normalized fitting error. With the utilization of tracer dynamics and enhanced network performance, MCP-Net has the potential for further improving the quantitative accuracy of dynamic PET. Our code is released at https://github.com/gxq1998/MCP-Net.
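The Patlak analysis underlying MCP-Net fits the linear model y = Ki·x + Vb, with x the running integral of the input function normalized by the input function and y the tissue-to-input ratio. A sketch of the fit and of a mean squared percentage fitting error of the kind the abstract describes (the exact normalization used in the paper's loss is assumed):

```python
import numpy as np

def _int_cp(cp, t):
    """Trapezoidal running integral of the input function cp(t)."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))

def patlak_fit(ct, cp, t):
    """Linear Patlak fit y = Ki*x + Vb with x = int(cp)/cp, y = ct/cp;
    frames are assumed to lie in the linear (late) regime."""
    x = _int_cp(cp, t) / cp
    y = ct / cp
    ki, vb = np.polyfit(x, y, 1)
    return ki, vb

def mspe(ct, cp, t, ki, vb):
    """Mean squared percentage fitting error (assumed formulation)."""
    fit = ki * _int_cp(cp, t) + vb * cp
    return float(np.mean(((ct - fit) / ct) ** 2) * 100.0)

# Synthetic tissue curve generated from known Ki = 0.05, Vb = 0.3:
t = np.linspace(1.0, 60.0, 30)
cp = np.exp(-0.05 * t) + 0.2
ct = 0.05 * _int_cp(cp, t) + 0.3 * cp
ki, vb = patlak_fit(ct, cp, t)
print(round(ki, 3), round(vb, 3))  # 0.05 0.3
```

In MCP-Net this fitting error is a differentiable penalty on the motion-corrected frames, so residual misalignment that degrades the Patlak fit is penalized during training alongside image similarity and displacement-gradient terms.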

20.
Opt Express ; 29(20): 31754-31766, 2021 Sep 27.
Article in English | MEDLINE | ID: mdl-34615262

ABSTRACT

We demonstrate adaptive super-resolution-based contact imaging on a CMOS chip, achieving subcellular spatial resolution over a large field of view of ∼24 mm². Using regular LED illumination, we acquire a single lower-resolution image of objects placed in close proximity to the sensor at unit magnification. For the raw contact-mode lens-free image, the pixel size of the sensor chip limits the spatial resolution. We develop a hybrid supervised-unsupervised strategy to train a super-resolution network, circumventing the lack of in-situ ground truth and effectively recovering a much higher-resolution image of the objects, permitting sub-micron spatial resolution across the entire active area of the sensor chip. We demonstrate the success of this approach by imaging the proliferation dynamics of cells cultured directly on the chip.
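In unit-magnification contact imaging, the sensor's pixel pitch sets both the field of view and the raw sampling limit that the super-resolution network must overcome. The arithmetic below illustrates this; the pixel pitch and array dimensions are illustrative assumptions chosen to roughly reproduce the ∼24 mm² FOV quoted above, not values taken from the paper:

```python
# Raw contact-mode resolution is bounded by sensor sampling: features
# below ~2x the pixel pitch cannot be resolved without super-resolution.
pitch_um = 1.1           # assumed CMOS pixel pitch (micrometers)
cols, rows = 4000, 5000  # assumed active-area pixel counts
fov_mm2 = (cols * pitch_um * 1e-3) * (rows * pitch_um * 1e-3)
nyquist_um = 2 * pitch_um
print(f"FOV ~{fov_mm2:.1f} mm^2, raw sampling limit ~{nyquist_um:.1f} um")
# → FOV ~24.2 mm^2, raw sampling limit ~2.2 um
```

Sub-micron resolution over the full chip therefore requires recovering detail well below the raw sampling limit, which is what the trained network provides.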


Subject(s)
Human Umbilical Vein Endothelial Cells , Image Enhancement/methods , Intracellular Space/diagnostic imaging , Lighting/methods , Microscopy/methods , Algorithms , Cell Culture Techniques , Cell Proliferation , Humans , Image Enhancement/instrumentation , Lenses , Microscopy/instrumentation , Neural Networks, Computer