1.
Article in English | MEDLINE | ID: mdl-38814764

ABSTRACT

Positron emission tomography/magnetic resonance imaging (PET/MRI) systems can provide precise anatomical and functional information with exceptional sensitivity and accuracy for neurological disorder detection. Nevertheless, the radiation exposure risks and economic costs of radiopharmaceuticals may pose significant burdens on patients. To mitigate image quality degradation during low-dose PET imaging, we proposed a novel 3D network equipped with a spatial brain transform (SBF) module that synthesizes high-quality PET images from low-dose whole-brain PET and MR images. The FreeSurfer toolkit was applied to derive the spatial brain anatomical alignment information, which was then fused with low-dose PET and MR features through the SBF module. Moreover, several deep learning methods were employed as comparison baselines to evaluate model performance, with the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and Pearson correlation coefficient (PCC) serving as quantitative metrics. Both the visual and quantitative results illustrate the effectiveness of our approach. The obtained PSNR and SSIM were 41.96 ± 4.91 dB (p < 0.01) and 0.9654 ± 0.0215 (p < 0.01), improvements of 19% and 20%, respectively, over the original low-dose brain PET images. A volume-of-interest (VOI) analysis of brain regions such as the left thalamus (PCC = 0.959) also showed that the proposed method achieves a more accurate standardized uptake value (SUV) distribution while preserving the details of brain structures. In future work, we hope to apply our method to other multimodal systems, such as PET/CT, to assist clinical brain disease diagnosis and treatment.
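As a rough illustration of the quantitative metrics used above, here is a minimal NumPy sketch of PSNR and PCC under their standard definitions (the function names and the data-range convention are our assumptions; the paper's exact implementation is not given, and SSIM additionally requires a windowed computation not shown here):

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    if data_range is None:
        # Assumed convention: dynamic range taken from the reference image.
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def pcc(ref, img):
    """Pearson correlation coefficient between two (flattened) volumes."""
    return np.corrcoef(np.ravel(ref), np.ravel(img))[0, 1]
```

A uniform offset of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB, which is a quick sanity check for the implementation.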

2.
Article in English | MEDLINE | ID: mdl-38805334

ABSTRACT

Nasopharyngeal carcinoma (NPC) is a malignant tumor primarily treated by radiotherapy. Accurate delineation of the target tumor is essential for improving the effectiveness of radiotherapy. However, the segmentation performance of current models is unsatisfactory due to poor boundaries, large-scale tumor volume variation, and the labor-intensive nature of manual delineation for radiotherapy. In this paper, MMCA-Net, a novel segmentation network for NPC using PET/CT images that incorporates an innovative multimodal cross attention transformer (MCA-Transformer) and a modified U-Net architecture, is introduced to enhance modal fusion by leveraging cross-attention mechanisms between CT and PET data. Our method, tested against ten algorithms via fivefold cross-validation on samples from Sun Yat-sen University Cancer Center and the public HECKTOR dataset, consistently topped all four evaluation metrics with average Dice similarity coefficients of 0.815 and 0.7944, respectively. Furthermore, ablation experiments were conducted to demonstrate the superiority of our method over multiple baseline and variant techniques. The proposed method has promising potential for application in other tasks.
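The Dice similarity coefficient reported above is the standard overlap metric for segmentation; a minimal sketch (the smoothing term `eps`, which guards against empty masks, is our assumption):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|), smoothed to avoid 0/0 on empty masks
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```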

3.
Eur Radiol ; 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38355987

ABSTRACT

OBJECTIVES: Total-body PET/CT scanners with long axial fields of view have enabled unprecedented image quality and quantitative accuracy. However, the ionizing radiation from CT is a major issue in PET imaging, which is more evident with reduced radiopharmaceutical doses in total-body PET/CT. Therefore, we attempted to generate CT-free attenuation-corrected (CTF-AC) total-body PET images through deep learning. METHODS: Based on total-body PET data from 122 subjects (29 females and 93 males), a well-established cycle-consistent generative adversarial network (Cycle-GAN) was employed to generate CTF-AC total-body PET images directly while introducing site structures as prior information. Statistical analyses, including Pearson correlation coefficient (PCC) and t-tests, were utilized for the correlation measurements. RESULTS: The generated CTF-AC total-body PET images closely resembled real AC PET images, showing reduced noise and good contrast in different tissue structures. The obtained peak signal-to-noise ratio and structural similarity index measure values were 36.92 ± 5.49 dB (p < 0.01) and 0.980 ± 0.041 (p < 0.01), respectively. Furthermore, the standardized uptake value (SUV) distribution was consistent with that of real AC PET images. CONCLUSION: Our approach could directly generate CTF-AC total-body PET images, greatly reducing the radiation risk to patients from redundant anatomical examinations. Moreover, the model was validated based on a multidose-level NAC-AC PET dataset, demonstrating the potential of our method for low-dose PET attenuation correction. In future work, we will attempt to validate the proposed method with total-body PET/CT systems in more clinical practices. CLINICAL RELEVANCE STATEMENT: The ionizing radiation from CT is a major issue in PET imaging, which is more evident with reduced radiopharmaceutical doses in total-body PET/CT. 
Our CT-free PET attenuation correction method would be beneficial for a wide range of patient populations, especially for pediatric examinations and patients who need multiple scans or who require long-term follow-up. KEY POINTS: • CT is the main source of radiation in PET/CT imaging, especially for total-body PET/CT devices, and reduced radiopharmaceutical doses make the radiation burden from CT more pronounced. • The CT-free PET attenuation correction method would benefit patients who need multiple scans or long-term follow-up by eliminating the additional radiation from redundant anatomical examinations. • The proposed method can directly generate CT-free attenuation-corrected (CTF-AC) total-body PET images, which is beneficial for PET/MRI or PET-only devices that lack CT images.

4.
Quant Imaging Med Surg ; 14(2): 2008-2020, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38415166

ABSTRACT

Background: The use of segmentation architectures in medical imaging, particularly for glioma diagnosis, marks a significant advancement in the field. Traditional methods often rely on post-processed images; however, key details can be lost during the fast Fourier transform (FFT) post-processing. Given the limitations of these techniques, there is growing interest in exploring more direct approaches. The adaptation of segmentation architectures originally designed for road extraction to medical imaging represents an innovative step in this direction. By employing K-space data as the model input, this method eliminates the information loss inherent in FFT post-processing, thereby potentially enhancing the precision and effectiveness of glioma diagnosis. Methods: In this study, a novel architecture based on a deep-residual U-Net was developed to accomplish the challenging task of automatically segmenting brain tumors from K-space data. Brain tumors were also segmented from K-space data with different under-sampling rates to verify the clinical applicability of our method. Results: Compared to the benchmarks set in the 2018 Brain Tumor Segmentation (BraTS) Challenge, our proposed architecture achieved superior performance, with Dice scores of 0.8573, 0.8789, and 0.7765 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively. The corresponding Hausdorff distances were 2.5649, 1.6146, and 2.7187 for the WT, TC, and ET regions, respectively. Notably, compared to traditional image-based approaches, the architecture also exhibited an improvement of approximately 10% in segmentation accuracy on K-space data at different under-sampling rates. Conclusions: These results show the superiority of our method over previous methods. Performing lesion segmentation directly on K-space data eliminates the time-consuming and tedious image reconstruction process, enabling the segmentation task to be accomplished more efficiently.
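To make the K-space setting concrete, the sketch below shows how an image maps to centered k-space via the FFT and how under-sampling can be emulated by masking phase-encoding rows. This is an illustrative simplification, not the paper's acquisition protocol; the random row-masking scheme is our assumption:

```python
import numpy as np

def to_kspace(image):
    """2D image -> centered k-space via the FFT."""
    return np.fft.fftshift(np.fft.fft2(image))

def undersample(kspace, rate, seed=0):
    """Zero out a random subset of phase-encoding rows, keeping ~`rate` of them."""
    rng = np.random.default_rng(seed)
    keep = rng.random(kspace.shape[0]) < rate
    return kspace * keep[:, None]
```

With `rate=1.0` every row is kept, so the inverse FFT recovers the original image exactly (up to floating-point precision), which is a useful round-trip check.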

5.
Med Phys ; 51(4): 2788-2805, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38189528

ABSTRACT

BACKGROUND: Accurate segmentation of lung nodules is crucial for the early diagnosis and treatment of lung cancer in clinical practice. However, the similarity between lung nodules and surrounding tissues has made their segmentation a longstanding challenge. PURPOSE: Existing deep learning and active contour models each have their limitations. This paper aims to integrate the strengths of both approaches while mitigating their respective shortcomings. METHODS: In this paper, we propose a few-shot segmentation framework that combines a deep neural network with an active contour model. We introduce heat kernel convolutions and high-order total variation into the active contour model and solve the challenging nonsmooth optimization problem using the alternating direction method of multipliers. Additionally, we use the presegmentation results obtained from training a deep neural network on a small sample set as the initial contours for our optimized active contour model, addressing the difficulty of manually setting the initial contours. RESULTS: We compared our proposed method with state-of-the-art methods for segmentation effectiveness using clinical computed tomography (CT) images acquired from two different hospitals and the publicly available LIDC dataset. The results demonstrate that our proposed method achieved outstanding segmentation performance according to both visual and quantitative indicators. CONCLUSION: Our approach utilizes the output of few-shot network training as prior information, avoiding the need to select the initial contour in the active contour model. Additionally, it provides mathematical interpretability to the deep learning, reducing its dependency on the quantity of training samples.


Subjects
Lung Neoplasms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Lung; Image Processing, Computer-Assisted/methods
6.
EJNMMI Phys ; 11(1): 1, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38165551

ABSTRACT

OBJECTIVES: This study aims to decrease the scan time and enhance image quality in pediatric total-body PET imaging by utilizing multimodal artificial intelligence techniques. METHODS: A total of 270 pediatric patients who underwent total-body PET/CT scans with a uEXPLORER at the Sun Yat-sen University Cancer Center were retrospectively enrolled. 18F-fluorodeoxyglucose (18F-FDG) was administered at a dose of 3.7 MBq/kg with an acquisition time of 600 s. Short-term scan PET images (acquired within 6, 15, 30, 60 and 150 s) were obtained by truncating the list-mode data. A three-dimensional (3D) neural network was developed with a residual network as the basic structure, fusing low-dose CT images as prior information, which were fed to the network at different scales. The short-term PET images and low-dose CT images were processed by the multimodal 3D network to generate full-length, high-dose PET images. The nonlocal means method and the same 3D network without the fused CT information were used as reference methods. The performance of the network model was evaluated by quantitative and qualitative analyses. RESULTS: Multimodal artificial intelligence techniques can significantly improve PET image quality. When fused with prior CT information, the anatomical information of the images was enhanced, and 60 s of scan data produced images of quality comparable to that of the full-time data. CONCLUSION: Multimodal artificial intelligence techniques can effectively improve the quality of pediatric total-body PET/CT images acquired using ultrashort scan times. This has the potential to decrease the use of sedation, enhance guardian confidence, and reduce the probability of motion artifacts.

7.
Quant Imaging Med Surg ; 14(1): 335-351, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38223072

ABSTRACT

Background: In low-dose computed tomography (LDCT) lung cancer screening, soft tissue is hardly appreciable due to high noise levels. While deep learning-based LDCT denoising methods have shown promise, they typically rely on structurally aligned synthesized paired data, which lack consideration of the clinical reality that there are no aligned LDCT and normal-dose CT (NDCT) images available. This study introduces an LDCT denoising method using clinically structure-unaligned but paired data sets (LDCT and NDCT scans from the same patients) to improve lesion detection during LDCT lung cancer screening. Methods: A cohort of 64 patients undergoing both LDCT and NDCT was randomly divided into training (n=46) and testing (n=18) sets. A two-stage training approach was adopted. First, Gaussian noise was added to NDCT data to create simulated LDCT data for generator training. Then, the model was trained on a clinically structure-unaligned paired data set using a Wasserstein generative adversarial network (WGAN) framework with the initial generator weights obtained during the first stage of training. An attention mechanism was also incorporated into the network. Results: Validated on a clinical CT data set, our proposed method outperformed other available methods [CycleGAN, Pixel2Pixel, block-matching and three-dimensional filtering (BM3D)] in noise removal and detail retention tasks in terms of the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE) metrics. Compared with the results produced by BM3D, our method yielded an average improvement of approximately 7% in terms of the three evaluation indicators. The probability density profile of the denoised CT output produced using our method best fit the reference NDCT scan. 
Additionally, our two-stage model outperformed the one-stage WGAN-based model in both objective and subjective evaluations, further demonstrating the higher effectiveness of our two-stage training approach. Conclusions: The proposed method performed the best in removing noise from LDCT scans and exhibited good detail retention, which could potentially enhance the lesion detection and characterization effects obtained for soft tissues in the scanning scope of LDCT lung cancer screening.
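The first training stage above simulates low-dose scans by adding Gaussian noise to normal-dose images; a minimal sketch (the noise level `sigma`, in HU, is an assumed parameter, and real low-dose CT noise is not purely Gaussian, which is precisely why the second, structure-unaligned training stage is needed):

```python
import numpy as np

def simulate_ldct(ndct_hu, sigma=25.0, seed=0):
    """Add zero-mean Gaussian noise (std `sigma`, in HU) to a normal-dose CT
    image to create a simulated low-dose counterpart for generator pretraining."""
    rng = np.random.default_rng(seed)
    return ndct_hu + rng.normal(0.0, sigma, size=ndct_hu.shape)
```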

8.
IEEE Trans Med Imaging ; 43(4): 1554-1567, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38096101

ABSTRACT

The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Thus, improving the quality of low-count images by using image postprocessing algorithms to achieve better clinical diagnoses has attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfying results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, namely, OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating the two extracted features from these two modalities as an intermediate fusion step, the proposed cross-modal feature fusion module (CM-FFM) adopts cross-attention at multiple feature levels, greatly improving the fusion of the two modalities' features. Extensive experimental assessments conducted on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods.


Subjects
Image Processing, Computer-Assisted; Optical Flow; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging
9.
Phys Med Biol ; 69(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-37972412

ABSTRACT

Objective. Nuclei segmentation is crucial for pathologists to accurately classify and grade cancer. However, this process faces significant challenges, such as the complex background structures in pathological images, the high-density distribution of nuclei, and cell adhesion. Approach. In this paper, we present an interactive nuclei segmentation framework that increases the precision of nuclei segmentation. Our framework incorporates expert monitoring to gather as much prior information as possible and accurately segments complex nucleus images through limited pathologist interaction, where only a small portion of the nucleus locations in each image are labeled. The initial contour is determined by the Voronoi diagram generated from the labeled points, which is then input into an optimized weighted convex difference model to regularize partition boundaries in an image. Specifically, we provide theoretical proof of the mathematical model, showing that the objective function decreases monotonically. Furthermore, we explore a postprocessing stage that incorporates histograms, which are simple and easy to handle and prevent arbitrariness and subjectivity in individual choices. Main results. To evaluate our approach, we conduct experiments on both a cervical cancer dataset and a nasopharyngeal cancer dataset. The experimental results demonstrate that our approach achieves competitive performance compared to other methods. Significance. The Voronoi diagram in the paper serves as prior information for the active contour, providing positional information for individual cells. Moreover, the active contour model achieves precise segmentation results while offering mathematical interpretability.
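The Voronoi initialization above can be sketched as a discrete nearest-seed partition of the image grid, where each labeled nucleus location claims its surrounding pixels. This is a simplification of the paper's construction (the function and argument names are ours), but it conveys how sparse point labels yield an initial partition:

```python
import numpy as np

def voronoi_labels(shape, seeds):
    """Assign each pixel of a `shape`-sized grid to its nearest seed point
    (a discrete Voronoi partition), usable as an initial segmentation."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # Squared Euclidean distance from every pixel to every seed.
    dists = [(yy - sy) ** 2 + (xx - sx) ** 2 for sy, sx in seeds]
    return np.argmin(np.stack(dists), axis=0)
```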


Subjects
Nasopharyngeal Neoplasms; Uterine Cervical Neoplasms; Female; Humans; Algorithms; Uterine Cervical Neoplasms/diagnostic imaging; Cell Nucleus; Image Processing, Computer-Assisted/methods
10.
Eur J Nucl Med Mol Imaging ; 51(2): 346-357, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37782321

ABSTRACT

PURPOSE: Positron emission tomography/magnetic resonance imaging (PET/MRI) is a powerful tool for brain imaging, but the spatial resolution of the PET scanners currently used for brain imaging can be further improved to enhance the quantitative accuracy of brain PET imaging. The purpose of this study is to develop an MR-compatible brain PET scanner that can simultaneously achieve a uniform high spatial resolution and high sensitivity by using dual-ended readout depth-encoding detectors. METHODS: The MR-compatible brain PET scanner, named SIAT bPET, consists of 224 dual-ended readout detectors. Each detector contains a 26 × 26 lutetium yttrium oxyorthosilicate (LYSO) crystal array with a crystal size of 1.4 × 1.4 × 20 mm3, read out by two 10 × 10 silicon photomultiplier (SiPM) arrays from both ends. The scanner has a detector ring diameter of 376.8 mm and an axial field of view (FOV) of 329 mm. The performance of the scanner, including spatial resolution, sensitivity, count rate, scatter fraction, and image quality, was measured. Imaging studies of phantoms and the brain of a volunteer were performed. The mutual interferences of the PET insert and the uMR790 3 T MRI scanner were measured, and simultaneous PET/MRI imaging of the brain of a volunteer was performed. RESULTS: A spatial resolution of better than 1.5 mm, with an average of 1.2 mm within the whole FOV, was obtained. A sensitivity of 11.0% was achieved at the center of the FOV for an energy window of 350-750 keV. Except for the dedicated RF coil, which caused a ~30% reduction in the sensitivity of the PET scanner, running MRI sequences had a negligible effect on the performance of the PET scanner. The reduction in the SNR and homogeneity of the MRI images was less than 2% when the PET scanner was inserted into the MRI scanner and powered on. High-quality PET and MRI images of a human brain were obtained from simultaneous PET/MRI scans.
CONCLUSION: The SIAT bPET scanner achieved a spatial resolution and sensitivity better than those of all MR-compatible brain PET scanners developed to date. It can be used either as a standalone brain PET scanner or as a PET insert placed inside a commercial whole-body MRI scanner to perform simultaneous PET/MRI imaging.


Subjects
Magnetic Resonance Imaging; Positron-Emission Tomography; Humans; Equipment Design; Positron-Emission Tomography/methods; Phantoms, Imaging; Brain/diagnostic imaging
11.
Eur Radiol ; 34(1): 182-192, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37566270

ABSTRACT

OBJECTIVES: To propose a novel model-free, data-driven approach based on the voxel-wise mapping of DCE-MRI time-intensity-curve (TIC) profiles for quantifying and visualizing hemodynamic heterogeneity, and to validate its potential clinical applications. MATERIALS AND METHODS: From December 2018 to July 2022, 259 patients with 325 pathologically confirmed breast lesions who underwent breast DCE-MRI were retrospectively enrolled. Based on the manually segmented breast lesions, the TIC of each voxel within the 3D whole lesion was classified into 19 subtypes based on wash-in rate (nonenhanced, slow, medium, and fast), wash-out enhancement (persistent, plateau, and decline), and wash-out stability (steady and unsteady), and the composition ratio of these 19 subtypes for each lesion was calculated as a new feature set (type-19). The three-type TIC classification, semiquantitative parameters, and type-19 features were used to build machine learning models for identifying lesion malignancy and classifying histologic grades, proliferation status, and molecular subtypes. RESULTS: The type-19 feature-based model significantly outperformed the models based on the three-type TIC method and semiquantitative parameters in distinguishing lesion malignancy (AUC = 0.875 vs. 0.831, p = 0.01, and 0.875 vs. 0.804, p = 0.03, respectively) and in predicting tumor proliferation status (AUC = 0.890 vs. 0.548, p = 0.006, and 0.890 vs. 0.596, p = 0.020), but not in predicting histologic grades (p = 0.820 and 0.970). CONCLUSION: In addition to conventional methods, the proposed computational approach provides a novel, model-free, data-driven way to quantify and visualize hemodynamic heterogeneity. CLINICAL RELEVANCE STATEMENT: Voxel-wise intra-lesion mapping of TIC profiles allows for visualization of hemodynamic heterogeneity and its composition ratio for differentiation of malignant and benign breast lesions.
KEY POINTS: • Voxel-wise TIC profiles were mapped, and their composition ratio was compared between various breast lesions. • The model based on the composition ratio of voxel-wise TIC profiles significantly outperformed the three-type TIC classification model and the semiquantitative parameters model in lesion malignancy differentiation and tumor proliferation status prediction in breast lesions. • This novel, data-driven approach allows the intuitive visualization and quantification of the hemodynamic heterogeneity of breast lesions.


Subjects
Breast Neoplasms; Neoplasms; Humans; Female; Retrospective Studies; Magnetic Resonance Imaging/methods; Breast/diagnostic imaging; Breast/pathology; Time; Neoplasms/pathology; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Contrast Media
12.
IEEE Trans Med Imaging ; 43(1): 122-134, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37428658

ABSTRACT

Low-count positron emission tomography (PET) imaging is challenging because of the ill-posedness of this inverse problem. Previous studies have demonstrated that deep learning (DL) holds promise for achieving improved low-count PET image quality. However, almost all data-driven DL methods suffer from fine-structure degradation and blurring effects after denoising. Incorporating DL into a traditional iterative optimization model can effectively improve image quality and recover fine structures, but little research has considered the full relaxation of the model, so the performance of this hybrid approach has not been sufficiently exploited. In this paper, we propose a learning framework that deeply integrates DL with an iterative optimization model based on the alternating direction method of multipliers (ADMM). The innovative feature of this method is that we break the inherent forms of the fidelity operators and use neural networks to process them. The regularization term is deeply generalized. The proposed method is evaluated on simulated and real data. Both the qualitative and quantitative results show that our proposed neural network method outperforms partial-operator-expansion-based neural network methods, neural network denoising methods, and traditional methods.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Algorithms
13.
Phys Med Biol ; 68(22)2023 Nov 10.
Article in English | MEDLINE | ID: mdl-37890466

ABSTRACT

Objective. Spatial resolution is a crucial parameter for a positron emission tomography (PET) scanner. The spatial resolution of a high-resolution small animal PET scanner is significantly influenced by the effect of depth of interaction (DOI) uncertainty. The aim of this work is to investigate the impact of DOI resolution on the spatial resolution of a small animal PET scanner called SIAT aPET and determine the required DOI resolution to achieve nearly uniform spatial resolution within the field of view (FOV). Approach. The SIAT aPET detectors utilize 1.0 × 1.0 × 20 mm3 crystals, with an average DOI resolution of ∼2 mm. A default number of 16 DOI bins are used during data acquisition. First, a Na-22 point source was scanned in the center of the axial FOV with different radial offsets. Then, a Derenzo phantom was scanned at radial offsets of 0 and 15 mm in the center of the axial FOV. The measured DOI information was rebinned to 1, 2, 4 and 8 DOI bins to mimic different DOI resolutions of the detectors during image reconstruction. Main results. Significant artifacts were observed in images obtained from both the point source and the Derenzo phantom when using only one DOI bin. When accurate measurement of DOI is not achieved, degradation in spatial resolution is more pronounced in the radial direction than in the tangential and axial directions for large radial offsets. The radial spatial resolutions at a 30 mm radial offset are 5.05, 2.62, 1.24, 0.86 and 0.78 mm when using 1, 2, 4, 8, or 16 DOI bins, respectively. The axial spatial resolution improved from ∼1.3 to 0.7 mm as the number of DOI bins increased from 1 to 16 at radial offsets from 0 to 25 mm. Two DOI bins are required to obtain images without significant artifacts. The required DOI resolution is about three times the crystal width of SIAT aPET to achieve a uniform submillimeter spatial resolution within the central 60 mm FOV and resolve the 1 mm rods of the Derenzo phantom at both positions.
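The rebinning of the 16 measured DOI bins into coarser bins can be sketched as integer division of per-event bin indices; this is a simplified stand-in for the reconstruction software's rebinning step (function and argument names are ours):

```python
import numpy as np

def rebin_doi(doi_bins, n_bins):
    """Collapse fine DOI bin indices (0-15) into `n_bins` coarser bins;
    `n_bins` must divide 16 (i.e., 1, 2, 4, 8, or 16)."""
    assert 16 % n_bins == 0, "n_bins must divide 16"
    # Consecutive groups of 16 // n_bins fine bins map to one coarse bin.
    return np.asarray(doi_bins) // (16 // n_bins)
```

Rebinning to a single bin discards DOI information entirely, which corresponds to the one-bin case that produced the artifacts described above.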


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Animals; Equipment Design; Positron-Emission Tomography/methods; Phantoms, Imaging
14.
EJNMMI Phys ; 10(1): 67, 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37874426

ABSTRACT

BACKGROUND: Dynamic positron emission tomography (PET) images are useful in clinical practice because they can be used to calculate the metabolic parameters (Ki) of tissues using graphical methods (such as Patlak plots). Ki is more stable than the standardized uptake value and is a good reference for clinical diagnosis. However, the long scanning time required for obtaining dynamic PET images, usually an hour, limits the method's clinical utility, and there is a tradeoff between scan duration and the signal-to-noise ratios (SNRs) of Ki images. The purpose of our study is to obtain, from a 30-min scan, images approximately equivalent to those produced by a 1-h scan, improving the SNRs of the 30-min images and halving the scanning time needed for dynamic PET. METHODS: In this paper, we use a U-Net as a feature extractor to obtain feature vectors with prior knowledge about the image structure of interest and then utilize a parameter generator to obtain five parameters of a two-tissue, three-compartment model and generate a time activity curve (TAC), which is trained to approach the original 1-h TAC. The generated dynamic PET images are then used to compute the Ki parametric image. RESULTS: A quantitative analysis showed that the network-generated Ki parameter maps improved the structural similarity index measure and peak SNR by averages of 2.27% and 7.04%, respectively, and decreased the root mean square error (RMSE) by 16.3% compared to maps generated with a scan time of 30 min. CONCLUSIONS: The proposed method is feasible, and satisfactory PET quantification accuracy can be achieved using the proposed deep learning method. Further clinical validation is needed before implementing this approach in routine clinical applications.
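The Patlak graphical method mentioned above estimates Ki as the slope of a linearized plot of tissue-to-plasma ratio against normalized integrated plasma activity. A minimal sketch follows (the equilibrium cutoff `t_star` and the function names are our assumptions; real analyses fit only the late, linear portion of the plot):

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=20.0):
    """Estimate Ki from a Patlak plot: regress C_t(t)/C_p(t) against
    (integral of C_p from 0 to t) / C_p(t), using frames with t >= t_star (min).
    Returns (Ki, intercept)."""
    # Cumulative trapezoidal integral of the plasma input function.
    cp_int = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1])))
    )
    mask = t >= t_star
    x = cp_int[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, intercept = np.polyfit(x, y, 1)  # slope = Ki
    return ki, intercept
```

A quick sanity check: with a constant unit plasma input, an irreversible-uptake tissue curve C_t = Ki·t + V should return exactly those values.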

15.
Eur J Nucl Med Mol Imaging ; 51(1): 27-39, 2023 12.
Article in English | MEDLINE | ID: mdl-37672046

ABSTRACT

PURPOSE: The axial field of view (AFOV) of a positron emission tomography (PET) scanner greatly affects the quality of PET images. Although a total-body PET scanner (uEXPLORER) with a large AFOV is more sensitive, it is more expensive and difficult to deploy widely. Therefore, we attempt to utilize high-quality images generated by uEXPLORER to optimize the quality of images from short-axis PET scanners through deep learning technology while controlling costs. METHODS: The experiments were conducted using PET images of three anatomical locations (brain, lung, and abdomen) from 335 patients. To simulate PET images from different axes, two protocols were used to obtain PET image pairs (each patient was scanned once). For low-quality PET (LQ-PET) images with a 320-mm AFOV, we applied a 300-mm FOV for brain reconstruction and a 500-mm FOV for lung and abdomen reconstruction. For high-quality PET (HQ-PET) images, we applied a 1940-mm AFOV during the reconstruction process. A 3D U-Net was utilized to learn the mapping relationship between LQ-PET and HQ-PET images. In addition, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were employed to evaluate the model performance. Furthermore, two nuclear medicine physicians evaluated the image quality based on clinical readings. RESULTS: The generated PET images of the brain, lung, and abdomen were quantitatively and qualitatively comparable with the HQ-PET images. In particular, our method achieved PSNR values of 35.41 ± 5.45 dB (p < 0.05), 33.77 ± 6.18 dB (p < 0.05), and 38.58 ± 7.28 dB (p < 0.05) for the three bed positions. The overall mean SSIM was greater than 0.94 for all patients who underwent testing. Moreover, the total subjective quality scores of the generated PET images for the three bed positions were 3.74 ± 0.74, 3.69 ± 0.81, and 3.42 ± 0.99 (on a scale from 1 to 5) from two experienced nuclear medicine experts.
Additionally, we evaluated the distribution of quantitative standardized uptake values (SUVs) in the regions of interest (ROIs). Both the SUV distribution and the profile peaks show that our results are consistent with the HQ-PET images, demonstrating the superiority of our approach. CONCLUSION: The findings demonstrate the potential of the proposed technique for improving the image quality of a PET scanner with a 320-mm or even shorter AFOV. Furthermore, this study explored the potential of utilizing uEXPLORER to achieve improved short-axis PET image quality at a limited economic cost, and related computer-aided diagnosis systems can help patients and radiologists.
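The PSNR metric reported above is standard; the following is a minimal numpy sketch (the `data_range` default taken from the reference image's dynamic range is an assumption, since implementations differ on how the peak value is chosen):

```python
import numpy as np

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and an estimated image."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    if data_range is None:
        # Assumption: the reference spans the full dynamic range.
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Example: a uniform error of 0.1 on a [0, 1] image gives 20 dB.
clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = clean + 0.1
print(round(psnr(clean, noisy, data_range=1.0), 2))  # 20.0
```

SSIM is more involved (local means, variances and covariances under a sliding window), so in practice a library implementation is preferable to hand-rolling it.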


Subjects
Deep Learning; Humans; Quality Improvement; Positron-Emission Tomography/methods; Brain; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
16.
Artif Intell Med ; 143: 102609, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37673577

ABSTRACT

Low-dose CT techniques attempt to minimize the radiation exposure of patients by estimating high-resolution normal-dose CT images, reducing the risk of radiation-induced cancer. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their high-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the noise observable in CT images. What's more, the noise intensity of low-dose CT images differs significantly across medical device manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to restore low-dose CT images under uncertain noise levels. In particular, the noise level is classified and reused as a prior pattern in the generator networks. Moreover, the discriminator network introduces noise-level determination. Under two dose-reduction strategies, experiments to evaluate the performance of the proposed method are conducted on two datasets, including the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of our proposed method in terms of noise suppression and structural detail preservation compared with several other deep learning-based methods. Ablation studies validate the effectiveness of the individual components regarding the afforded performance improvement. Further research on practical clinical applications and other medical modalities is required in future work.
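The abstract describes reusing a predicted noise level as a prior pattern in the generator. One common realization of such conditioning (an illustrative sketch under assumed conventions, not the paper's actual code) is to concatenate a one-hot noise-level code to the input image as extra channels:

```python
import numpy as np

def with_noise_level_prior(image, level, n_levels):
    """Stack a (H, W) image with one-hot noise-level planes -> (1 + n_levels, H, W).

    The channel layout (image first, then one plane per level) is an
    illustrative assumption, not the paper's specification.
    """
    if not 0 <= level < n_levels:
        raise ValueError("noise level out of range")
    h, w = image.shape
    planes = np.zeros((n_levels, h, w), dtype=image.dtype)
    planes[level] = 1.0  # broadcast the one-hot code over the spatial grid
    return np.concatenate([image[np.newaxis], planes], axis=0)

x = np.random.rand(64, 64).astype(np.float32)
conditioned = with_noise_level_prior(x, level=2, n_levels=4)
print(conditioned.shape)  # (5, 64, 64)
```

A convolutional generator can then consume the stacked tensor directly, letting every layer see the dose-level prior alongside the noisy image.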


Subjects
Radiation Exposure; Humans; Radiation Exposure/prevention & control; Uncertainty; Tomography, X-Ray Computed
17.
Br J Radiol ; 96(1149): 20230038, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37393527

ABSTRACT

OBJECTIVES: Our work aims to study the feasibility of a deep learning algorithm to reduce the 68Ga-FAPI radiotracer injected activity and/or shorten the scanning time and to investigate its effects on image quality and lesion detection ability. METHODS: The data of 130 patients who underwent 68Ga-FAPI positron emission tomography (PET)/CT in two centers were studied. Predicted full-dose images (DL-22%, DL-28% and DL-33%) were obtained from three groups of low-dose images using a deep learning method and compared with the standard-dose images (raw data). Injected activity for full-dose images was 2.16 ± 0.61 MBq/kg. The quality of the predicted full-dose PET images was subjectively evaluated by two nuclear medicine physicians using a 5-point Likert scale, and objectively evaluated by the peak signal-to-noise ratio, structural similarity index and root mean square error. The maximum standardized uptake value (SUVmax) and the mean standardized uptake value (SUVmean) were used to quantitatively analyze the four volumes of interest (the brain, liver, left lung and right lung) and all lesions, and the lesion detection rate was calculated. RESULTS: The data showed that the DL-33% images of the two test data sets met clinical diagnosis requirements, and the overall lesion detection rate across the two centers reached 95.9%. CONCLUSION: Through deep learning, we demonstrated that reducing the 68Ga-FAPI injected activity and/or shortening the scanning time in PET/CT imaging is feasible. In addition, a 68Ga-FAPI dose as low as 33% of the standard dose maintained acceptable image quality. ADVANCES IN KNOWLEDGE: This is the first study of low-dose 68Ga-FAPI PET images from two centers using a deep learning algorithm.
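The SUVmax and SUVmean figures used to analyze the volumes of interest reduce to simple masked statistics over the SUV image; a minimal numpy sketch (the function name and the toy two-voxel mask are illustrative assumptions):

```python
import numpy as np

def suv_stats(suv_volume, voi_mask):
    """Return (SUVmax, SUVmean) over a boolean volume-of-interest mask."""
    vals = np.asarray(suv_volume, dtype=np.float64)[np.asarray(voi_mask, dtype=bool)]
    if vals.size == 0:
        raise ValueError("empty VOI mask")
    return float(vals.max()), float(vals.mean())

pet = np.array([[0.5, 2.0], [4.0, 1.0]])       # toy SUV image
voi = np.array([[0, 1], [1, 0]])               # hypothetical 2-voxel VOI
print(suv_stats(pet, voi))  # (4.0, 3.0)
```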


Subjects
Deep Learning; Gallium Radioisotopes; Humans; Feasibility Studies; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Algorithms; Fluorodeoxyglucose F18
18.
Quant Imaging Med Surg ; 13(7): 4447-4462, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456307

ABSTRACT

Background: Brain structure segmentation is of great value in diagnosing brain disorders, allowing radiologists to quickly acquire regions of interest and assisting in subsequent analysis, diagnosis and treatment. Current brain structure segmentation methods are usually applied to magnetic resonance (MR) images, which provide higher soft tissue contrast and better spatial resolution. However, few segmentation methods have been developed for positron emission tomography/magnetic resonance imaging (PET/MRI) systems, which combine functional and structural information to improve analysis accuracy. Methods: In this paper, we explore a dual-modality image segmentation model to segment brain 18F-fluorodeoxyglucose (18F-FDG) PET/MR images based on the U-Net architecture. This model takes registered PET and MR images as parallel inputs, and four evaluation metrics (Dice score, Jaccard coefficient, precision and sensitivity) are used to evaluate segmentation performance. Moreover, we also compared the proposed approach with other single-modality segmentation strategies, including PET-only segmentation and MRI-only segmentation. Results: The experiments were conducted on the clinical head data of 120 patients, and the results show that the proposed algorithm accurately delineates brain volumes of interest (VOIs), achieving superior performance with a Dice score of 84.24%±1.44%, a Jaccard coefficient of 74.36%±2.40%, a precision of 84.33%±1.56% and a sensitivity of 84.73%±1.56%. Furthermore, compared with directly using the FreeSurfer toolkit, the proposed method reduced the segmentation time, requiring only 20 seconds to segment the whole brain of each patient. Conclusions: We present a deep learning-based method for the joint segmentation of anatomical and functional PET/MR images. Compared with other single-modality methods, our method greatly improved the accuracy of brain structure delineation, which shows great potential for brain analysis.
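The Dice score and Jaccard coefficient used as evaluation metrics above can be sketched in a few lines of numpy (a minimal version, assuming non-empty binary masks):

```python
import numpy as np

def dice_and_jaccard(pred, target):
    """Dice score and Jaccard coefficient for binary segmentation masks.

    Assumes at least one mask is non-empty; an epsilon would be needed otherwise.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum())
    jaccard = intersection / union
    return float(dice), float(jaccard)

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
print(dice_and_jaccard(a, b))  # Dice 0.5, Jaccard 1/3
```

Precision and sensitivity follow the same pattern, counting true positives against predicted and ground-truth positives respectively.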

19.
Quant Imaging Med Surg ; 13(7): 4365-4379, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456308

ABSTRACT

Background: Computed tomography (CT) is now universally applied in clinical practice owing to its non-invasiveness and reliability for lesion detection, which greatly improves diagnostic accuracy for patients with systemic diseases. Although low-dose CT reduces the X-ray radiation dose and harm to the human body, it inevitably produces noise and artifacts that are detrimental to information acquisition and medical diagnosis from CT images. Methods: This paper proposes a Wasserstein generative adversarial network (WGAN) with a convolutional block attention module (CBAM) to directly synthesize high-energy CT (HECT) images from low-energy scans, which greatly reduces the X-ray radiation from high-energy scanning. Specifically, the proposed generator in the WGAN consists of a Visual Geometry Group network (VGG16), 9 residual blocks, upsampling layers and a subsequent CBAM attention block. The convolutional block attention module is integrated into the generator to improve the denoising ability of the network, as verified by our ablation experiments. Results: The results of the generator attention-module ablation comparison indicate an optimization boost to the overall generator model, with the synthesized high-energy CT achieving the best metrics and denoising effect. In comparison experiments against different methods, it can be clearly observed that our proposed method is superior in the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and most of the statistics (average CT value and its standard deviation). Because P<0.05, the differences between samples are statistically significant. The pixel-level data distribution of the images synthesized by our method is also the most similar to that of the high-energy CT images.
Conclusions: Experimental results indicate that CBAM is able to suppress the noise and artifacts effectively and suggest that the image synthesized by the proposed method is closest to the high-energy CT image in terms of visual perception and objective evaluation metrics.
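As a rough sketch of the channel-attention half of a CBAM block like the one integrated into the generator above (the weights here are random placeholders; the reduction ratio and shared-MLP structure follow the original CBAM design and are not taken from this paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention for a (C, H, W) feature map.

    w1: (C // r, C) and w2: (C, C // r) form the shared two-layer MLP,
    where r is the channel reduction ratio.
    """
    avg_pool = feat.mean(axis=(1, 2))                  # (C,)
    max_pool = feat.max(axis=(1, 2))                   # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # shared MLP with ReLU
    weights = sigmoid(mlp(avg_pool) + mlp(max_pool))   # per-channel gate in (0, 1)
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

The full CBAM follows this with a spatial-attention stage that pools across channels and gates each pixel; both stages rescale the features without changing their shape.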

20.
Comput Methods Programs Biomed ; 237: 107571, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37156020

ABSTRACT

BACKGROUND: Computed tomography (CT) and magnetic resonance imaging (MRI) are the mainstream imaging technologies in clinical practice. CT imaging can reveal high-quality anatomical and physiopathological structures, especially bone tissue, for clinical diagnosis. MRI provides high resolution in soft tissue and is sensitive to lesions. CT combined with MRI diagnosis has become a regular part of image-guided radiation treatment planning. METHODS: In this paper, to reduce the radiation exposure dose of CT examinations and ameliorate the limitations of traditional virtual imaging technologies, we propose a generative MRI-to-CT transformation method with structural perceptual supervision. Even though the registered MRI-CT dataset contains structural misalignment, our proposed method can better align the structural information of synthetic CT (sCT) images to the input MRI images while simulating the CT modality in the MRI-to-CT cross-modality transformation. RESULTS: We retrieved a total of 3416 paired brain MRI-CT images as the train/test dataset, including 1366 training images from 10 patients and 2050 test images from 15 patients. Several methods (the baseline methods and the proposed method) were evaluated by the HU difference map, HU distribution, and various similarity metrics, including the mean absolute error (MAE), structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). In our quantitative experiments, the proposed method achieves the lowest mean MAE of 0.147, the highest mean PSNR of 19.27, and the highest mean NCC of 0.431 on the overall CT test dataset. CONCLUSIONS: In conclusion, both qualitative and quantitative results for the synthetic CT validate that the proposed method preserves the structural information of the target CT bone tissue with higher similarity than the baseline methods.
Furthermore, the proposed method provides better HU intensity reconstruction for simulating the distribution of the CT modality. These experimental results indicate that the proposed method is worth further investigation.
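The MAE and NCC similarity metrics used above to compare synthetic and target CT can be sketched as follows (a minimal numpy version; it assumes non-constant images, since NCC is undefined for a zero-variance input):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))))

def ncc(a, b):
    """Zero-mean normalized cross-correlation, in [-1, 1]."""
    a = np.array(a, dtype=float).ravel()  # np.array copies, so inputs stay untouched
    b = np.array(b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

ct = np.array([[0.0, 100.0], [200.0, 300.0]])
sct = ct + 10.0  # a constant HU offset
print(mae(ct, sct))            # 10.0
print(round(ncc(ct, sct), 6))  # NCC is invariant to constant offsets
```

Note the asymmetry between the two: MAE penalizes any global HU shift, while NCC ignores offset and scale, which is why the paper reports both alongside PSNR and SSIM.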


Subjects
Image Processing, Computer-Assisted; Radiotherapy, Image-Guided; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging