1.
Eur J Nucl Med Mol Imaging ; 49(6): 1833-1842, 2022 05.
Article in English | MEDLINE | ID: mdl-34882262

ABSTRACT

PURPOSE: This study aims to compare two approaches that use only emission PET data and a convolutional neural network (CNN) to correct for the attenuation (µ) of the annihilation photons in PET. METHODS: One approach uses a CNN to generate µ-maps from the non-attenuation-corrected (NAC) PET images (µ-CNNNAC). In the other, a CNN is used to improve the accuracy of µ-maps generated using maximum likelihood estimation of activity and attenuation (MLAA) reconstruction (µ-CNNMLAA). We investigated the improvement in CNN performance from combining the two methods (µ-CNNMLAA+NAC) and the suitability of µ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS: The error of the attenuation correction factors estimated using µ-CT and µ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, indicating the validity of scatter estimation from µ-CNNNAC. However, CNNNAC provided less accurate bone structures in the µ-maps, while the best recovery of fine bone structures was obtained by applying CNNMLAA+NAC. Additionally, the µ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using µ-CNNMLAA and µ-CNNMLAA+NAC were superior to those corrected using µ-CNNNAC in terms of their similarity to λ-CT. However, the improvement in similarity to λ-CT from combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions: λ-CNNNAC = 5.45% ± 7.88%, λ-CNNMLAA = 1.21% ± 5.74%, λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions: λ-CNNNAC = 1.37% ± 5.16%, λ-CNNMLAA = 0.23% ± 3.81%, λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION: The use of CNNNAC was feasible for scatter estimation to address the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
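The quantity compared above, the attenuation correction factor (ACF), is the exponential of the line integral of µ along each line of response. A minimal numpy sketch follows; the function names, voxel size, and water-cylinder phantom are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def attenuation_correction_factors(mu_map, voxel_size_cm=0.4):
    """ACF = exp(integral of mu dl) along one projection axis.
    mu_map holds linear attenuation coefficients in cm^-1."""
    line_integrals = mu_map.sum(axis=0) * voxel_size_cm  # discretized line integral
    return np.exp(line_integrals)

def percent_error(estimate, reference):
    """Mean absolute percent error between two sets of correction factors."""
    return 100.0 * np.mean(np.abs(estimate - reference) / reference)

# Toy phantom: water-like slab (mu ~ 0.096 cm^-1 at 511 keV), 30 voxels thick
mu = np.zeros((50, 32, 32))
mu[10:40] = 0.096
acf = attenuation_correction_factors(mu)
```

Since µ ≥ 0, every ACF is at least 1; for the slab the value is exp(30 × 0.096 × 0.4) everywhere.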


Subject(s)
Deep Learning , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods
2.
EJNMMI Phys ; 10(1): 20, 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36947267

ABSTRACT

PURPOSE: Quantitative thyroid single-photon emission computed tomography/computed tomography (SPECT/CT) requires computed tomography (CT)-based attenuation correction and manual thyroid segmentation on CT for %thyroid uptake measurements. Here, we aimed to develop a deep-learning-based CT-free quantitative thyroid SPECT that can generate an attenuation map (µ-map) and automatically segment the thyroid. METHODS: Quantitative thyroid SPECT/CT data (n = 650) were retrospectively analyzed. Typical 3D U-Nets were used for µ-map generation and automatic thyroid segmentation. Primary emission and scatter SPECT images were used as inputs to generate a µ-map, with the original µ-map from CT as the label (268 and 30 for training and validation, respectively). The generated µ-map and primary emission SPECT were used as inputs for the automatic thyroid segmentation, with the manual thyroid segmentation as the label (280 and 36 for training and validation, respectively). Other thyroid SPECT/CT (n = 36) and salivary SPECT/CT (n = 29) data were employed for verification. RESULTS: The synthetic µ-map demonstrated a strong correlation (R² = 0.972) and minimal error (mean square error = 0.936 × 10⁻⁴, %normalized mean absolute error = 0.999%) in attenuation coefficients when compared to the ground truth (n = 30). Compared to manual segmentation, the automatic thyroid segmentation was excellent, with a Dice similarity coefficient of 0.767, a minimal thyroid volume difference of -0.72 mL, and a short 95% Hausdorff distance of 9.416 mm (n = 36). Additionally, %thyroid uptake by synthetic µ-map and automatic thyroid segmentation (CT-free SPECT) was similar to that by the original µ-map and manual thyroid segmentation (SPECT/CT) (3.772 ± 5.735% vs. 3.682 ± 5.516%, p = 0.1090) (n = 36). Furthermore, synthetic µ-map generation and automatic thyroid segmentation were successfully performed on the salivary SPECT/CT data using the deep-learning algorithms trained on thyroid SPECT/CT (n = 29). CONCLUSION: CT-free quantitative SPECT for automatic evaluation of %thyroid uptake can be realized by deep learning.
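The Dice similarity coefficient and segmented-volume difference used to evaluate the automatic segmentation are standard overlap metrics. A minimal sketch follows; the voxel volume and the toy masks are illustrative assumptions, not the paper's values:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def volume_difference_ml(pred, truth, voxel_volume_ml=0.02):
    """Segmented-volume difference (prediction minus ground truth) in mL."""
    return (int(pred.sum()) - int(truth.sum())) * voxel_volume_ml

# Two overlapping toy masks shifted by one slice
a = np.zeros((10, 10, 10), int); a[2:6, 2:6, 2:6] = 1
b = np.zeros((10, 10, 10), int); b[3:7, 2:6, 2:6] = 1
```

For these masks, the overlap is 3 of 4 slices on each side, giving a Dice of 0.75 and zero volume difference.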

3.
Nucl Med Mol Imaging ; 57(2): 86-93, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36998591

ABSTRACT

Purpose: Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods: The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets. Among the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 were used as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results: The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margins of the tumors, which were determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion: The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
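The two-stage hand-off described above (a global 3D prediction followed by an eight-slice slab around a selected slice) can be sketched as follows. The slice-selection rule (largest predicted area) and the function names are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def select_slice(binary_volume):
    """Stage-1 output: pick the axial slice with the largest predicted tumor area."""
    areas = binary_volume.sum(axis=(1, 2))
    return int(np.argmax(areas))

def extract_slab(volume, center, n_slices=8):
    """Cut n_slices consecutive slices around `center` for the stage-2 network,
    clamping the window at the volume boundaries."""
    start = max(0, min(center - n_slices // 2, volume.shape[0] - n_slices))
    return volume[start:start + n_slices]

# Toy stage-1 prediction: a small tumor spanning slices 18-22
vol = np.zeros((40, 64, 64), int)
vol[18:23, 20:30, 20:30] = 1
k = select_slice(vol)        # first slice with maximal area
slab = extract_slab(vol, k)  # 8-slice input for the regional U-Net
```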

4.
Phys Med Biol ; 66(11)2021 05 20.
Article in English | MEDLINE | ID: mdl-33910170

ABSTRACT

We propose a deep learning-based, data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not need a gated CT. The proposed method is a multi-step process consisting of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed for phase-matched AC of the gated-PET images. We conducted a non-rigid registration of the gated-PET images to generate motion-free PET images. We trained the CNN by conducting 3D patch-based learning with 80 oncologic whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT scan datasets and applied it to seven regional PET/CT scans covering the lower lung and upper liver. We investigated the impact of the proposed respiratory phase-matched AC of PET without utilizing CT on tumor size and standardized uptake value (SUV) assessment, as well as on PET image quality (%STD). The attenuation-corrected gated and motion-free PET images generated using the proposed method yielded sharper organ boundaries and better noise characteristics than conventional gated and ungated PET images. A banana artifact observed in a phase-mismatched CT-based AC was not observed with the proposed approach. By employing the proposed method, tumor size was reduced by 12.3% and SUV90% was increased by 13.3% in tumors with movements larger than 5 mm. The %STD of liver uptake was reduced by 11.1%. The deep learning-based, data-driven respiratory phase-matched AC method improved PET image quality and reduced motion artifacts.
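Data-driven respiratory gating, the first step of the pipeline, typically assigns list-mode events to gates from a surrogate respiratory signal. A minimal amplitude-based gating sketch follows; the sinusoidal surrogate and quantile binning are illustrative assumptions, not the paper's specific algorithm:

```python
import numpy as np

def gate_events(event_times, signal_times, signal, n_gates=4):
    """Assign list-mode events to respiratory gates by amplitude quantiles
    of a surrogate respiratory trace sampled at signal_times."""
    amp = np.interp(event_times, signal_times, signal)        # amplitude at each event
    edges = np.quantile(amp, np.linspace(0, 1, n_gates + 1))  # equal-count bin edges
    gates = np.searchsorted(edges, amp, side="right") - 1
    return np.clip(gates, 0, n_gates - 1)

# Toy data: 60 s of events with a ~4 s breathing cycle surrogate
t = np.linspace(0, 60, 6000)
resp = np.sin(2 * np.pi * t / 4.0)
gates = gate_events(t, t, resp, n_gates=4)
```

Quantile edges give gates with approximately equal event counts, which keeps the per-gate noise level comparable.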


Subject(s)
Artifacts , Positron Emission Tomography Computed Tomography , Algorithms , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted , Movement , Positron-Emission Tomography
5.
Nucl Med Mol Imaging ; 54(6): 299-304, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33282001

ABSTRACT

PURPOSE: Early deep-learning-based image denoising techniques mainly focused on a fully supervised model that learns how to generate a clean image from a noisy input (noise2clean: N2C). The aim of this study is to explore the feasibility of self-supervised methods (noise2noise: N2N and noiser2noise: Nr2N) for PET image denoising based on measured PET data sets by comparing their performance with the conventional N2C model. METHODS: For training and evaluating the networks, 18F-FDG brain PET/CT scan data from 14 patients were retrospectively used (10 for training and 4 for testing). From the 60-min list-mode data, we generated a total of 100 data bins of 10-s duration. We also generated 40-s-long data by adding four non-overlapping 10-s bins and 300-s-long reference data by adding all the list-mode data. We employed a U-Net, which is widely used for various tasks in biomedical imaging, to train and test the proposed denoising models. RESULTS: N2C, N2N, and Nr2N were all effective in improving the noisy inputs. While N2N showed PSNR equivalent to N2C at all noise levels, Nr2N yielded higher SSIM than N2N. N2N yielded denoised images similar to the Gaussian-filtered reference image regardless of the input noise level. Image contrast was better in the N2N results. CONCLUSION: The self-supervised denoising method will be useful for reducing PET scan time or radiation dose.

6.
Sci Rep ; 9(1): 10308, 2019 07 16.
Article in English | MEDLINE | ID: mdl-31311963

ABSTRACT

Personalized dosimetry with high accuracy is crucial owing to the growing interest in personalized medicine. Direct Monte Carlo simulation is considered the state-of-the-art voxel-based dosimetry technique; however, it incurs excessive computational cost and time. To overcome the limitations of the direct Monte Carlo approach, we propose using a deep convolutional neural network (CNN) for voxel dose prediction. PET and CT image patches were used as inputs for the CNN, with the ground truth given by direct Monte Carlo. The predicted voxel dose rate maps from the CNN were compared with the ground truth and with dose rate maps generated by the voxel S-value (VSV) kernel convolution method, one of the common voxel-based dosimetry techniques. The CNN-based dose rate map agreed well with the ground truth, with voxel dose rate errors of 2.54% ± 2.09%. The VSV kernel approach showed a voxel error of 9.97% ± 1.79%. In the whole-body dosimetry study, the average organ absorbed dose errors were 1.07%, 9.43%, and 34.22% for the CNN, VSV, and OLINDA/EXM dosimetry software, respectively. The proposed CNN-based dosimetry method showed improvements over conventional dosimetry approaches and yielded results comparable to those of direct Monte Carlo simulation with significantly lower calculation time.
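The VSV baseline convolves the activity map with a voxel S-value kernel to obtain a dose rate map. A minimal sketch with a hypothetical isotropic kernel follows (a real VSV kernel depends on the radionuclide, tissue, and voxel size, so the numbers here are purely illustrative):

```python
import numpy as np

def vsv_dose_rate(activity, kernel):
    """Voxel S-value dosimetry: dose-rate map as a 3D convolution of the
    activity map with a voxel S-value kernel (scatter-add formulation)."""
    az, ay, ax = activity.shape
    kz, ky, kx = kernel.shape
    pad = np.zeros((az + kz - 1, ay + ky - 1, ax + kx - 1))
    for (z, y, x), a in np.ndenumerate(activity):
        if a:
            pad[z:z + kz, y:y + ky, x:x + kx] += a * kernel
    cz, cy, cx = kz // 2, ky // 2, kx // 2  # crop back to the input grid
    return pad[cz:cz + az, cy:cy + ay, cx:cx + ax]

# Toy isotropic kernel falling off with squared distance from the source voxel
g = np.indices((5, 5, 5)) - 2
kernel = 1.0 / (1.0 + (g ** 2).sum(axis=0))
activity = np.zeros((9, 9, 9))
activity[4, 4, 4] = 1.0          # single point source
dose = vsv_dose_rate(activity, kernel)
```

A point source reproduces the kernel around the source voxel, which is a convenient sanity check on the convolution.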

7.
J Nucl Med ; 60(8): 1183-1189, 2019 08.
Article in English | MEDLINE | ID: mdl-30683763

ABSTRACT

We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use activity and attenuation maps estimated using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) to learn a CT-derived attenuation map. Methods: The whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-Net was trained to predict a CT-derived µ-map (µ-CT) from the MLAA-generated activity distribution (λ-MLAA) and µ-map (µ-MLAA). We used 1.3 million patches derived from 60 patients' data for training the CNN, the data of 20 others were used as a validation set to prevent overfitting, and the data of the remaining 20 were used as a test set for the CNN performance analysis. The attenuation maps generated using the proposed method (µ-CNN), µ-MLAA, and the 4-segment method (µ-segment) were compared with µ-CT, the ground truth. We also compared the voxelwise correlation between the activity images reconstructed using ordered-subset expectation maximization with the µ-maps, and the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generates less noisy attenuation maps and achieves better bone identification than MLAA. The average Dice similarity coefficient for bone regions between µ-CNN and µ-CT was 0.77, significantly higher than that between µ-MLAA and µ-CT (0.36). The CNN result also showed the best pixel-by-pixel correlation with the CT-based results and remarkably reduced differences in the activity maps in comparison to CT-based attenuation correction. Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.


Subject(s)
Brain Mapping , Fluorodeoxyglucose F18/pharmacology , Magnetic Resonance Imaging , Neoplasms/diagnostic imaging , Positron-Emission Tomography , Adult , Aged , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Male , Metals , Middle Aged , Multimodal Imaging , Neural Networks, Computer , Reproducibility of Results , Retrospective Studies , Whole Body Imaging
8.
J Nucl Med ; 59(10): 1624-1629, 2018 10.
Article in English | MEDLINE | ID: mdl-29449446

ABSTRACT

Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (µ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. Methods: We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction (18F-fluorinated-N-3-fluoropropyl-2-ß-carboxymethoxy-3-ß-(4-iodophenyl)nortropane [18F-FP-CIT] PET scans, which show highly specific binding to the striatum of the brain). Three different CNN architectures (a convolutional autoencoder [CAE], U-Net, and a hybrid of the two) were designed and trained to learn a CT-derived µ-map (µ-CT) from the MLAA-generated activity distribution and µ-map (µ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For training the CNNs, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to µ-CT of the CNN-generated µ-maps (µ-CAE, µ-U-Net, and µ-Hybrid) and of µ-MLAA was compared using Dice similarity coefficients. In addition, we compared the activity concentrations in specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions and the binding ratios in the striatum in the PET activity images reconstructed using those µ-maps. Results: The CNNs generated less noisy and more uniform µ-maps than the original µ-MLAA. Moreover, the air cavities and bones were better resolved in the proposed CNN outputs. The proposed deep learning approach was also useful for mitigating the crosstalk problem in the MLAA reconstruction. The hybrid network of CAE and U-Net yielded the µ-maps most similar to µ-CT (Dice similarity coefficient in the whole head: 0.79 in bone and 0.72 in air cavities), resulting in only about a 5% error in activity and binding ratio quantification. Conclusion: The proposed deep learning approach is promising for accurate attenuation correction of activity distributions in time-of-flight PET systems.
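The patient-level 5-fold cross-validation used above can be sketched in a few lines; the function name and random seed are illustrative, not the paper's implementation:

```python
import numpy as np

def five_fold_splits(n_patients, seed=0):
    """Patient-level 5-fold cross-validation: each patient appears in the
    test fold exactly once; the remaining folds form the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_patients)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

splits = list(five_fold_splits(40))  # 40 patients, as in the study
```

Splitting at the patient level (rather than the slice level) prevents slices from the same patient leaking between training and test sets.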


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Aged , Brain/diagnostic imaging , Brain/metabolism , Dopamine/metabolism , Female , Humans , Male , Positron Emission Tomography Computed Tomography , Time Factors
9.
Phys Med Biol ; 63(14): 145011, 2018 07 16.
Article in English | MEDLINE | ID: mdl-29923839

ABSTRACT

The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as input and their middle slice as the label. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform five-fold cross-validation to confirm the performance consistency. Because all input and output images are used in two-dimensional slice format, the total number of slices for training the CNN is 7670. We assess the performance of the proposed method with respect to resolution and contrast, as well as noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is the deblurring of the boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and a lower normalized root mean square error than the input (thicker slices). The CNN output noise level is lower than the ground truth and equivalent to the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and denoising.
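The PSNR and normalized root mean square error metrics reported above can be computed as follows. The choice of peak value and normalization by dynamic range are common conventions and may differ from the paper's exact definitions:

```python
import numpy as np

def psnr(output, truth):
    """Peak signal-to-noise ratio in dB against the ground-truth image."""
    mse = np.mean((output.astype(float) - truth.astype(float)) ** 2)
    return 10.0 * np.log10(truth.max() ** 2 / mse)

def nrmse(output, truth):
    """Root mean square error normalized by the ground-truth dynamic range."""
    rmse = np.sqrt(np.mean((output.astype(float) - truth.astype(float)) ** 2))
    return rmse / (truth.max() - truth.min())

# Toy example: a ramp image with a constant offset of 5 gray levels
truth = np.linspace(0, 255, 256).reshape(16, 16)
noisy = truth + 5.0
```

A constant offset of 5 against a peak of 255 gives a PSNR of 10·log10(255²/25) ≈ 34.2 dB and an NRMSE of 5/255.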


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Parkinson Disease/diagnostic imaging , Tomography, X-Ray Computed/methods , Aged , Female , Humans , Male , Radiation Dosage , Signal-To-Noise Ratio