1.
Br J Radiol ; 97(1155): 632-639, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38265235

ABSTRACT

OBJECTIVES: To develop and validate a super-resolution (SR) algorithm generating clinically feasible chest radiographs from 64-fold reduced data. METHODS: An SR convolutional neural network was trained to produce original-resolution images (output) from 64-fold reduced images (input) using 128 × 128 patches (n = 127 030). For validation, 112 radiographs were collected, including cases with pneumothorax (n = 17), nodules (n = 20), consolidations (n = 18), and ground-glass opacity (GGO; n = 16). Three image sets were prepared: the original images and those reconstructed from 64-fold reduced data using SR and conventional linear interpolation (LI). The mean-squared error (MSE) was calculated to measure the similarity between the reconstructed and original images, and image noise was quantified. Three thoracic radiologists evaluated the quality of each image and decided whether any abnormalities were present. RESULTS: The SR images were more similar to the original images than the LI-reconstructed images (MSE: 9269 ± 1015 vs. 9429 ± 1057; P = .02). The SR images showed lower measured noise than both the original and LI-reconstructed images and were rated by the three radiologists as having the lowest noise level (all P < .01). The radiologists' pooled sensitivity with the SR-reconstructed images did not differ significantly from that with the original images for detecting pneumothorax (SR vs. original, 90.2% [46/51] vs. 96.1% [49/51]; P = .19), nodule (90.0% [54/60] vs. 85.0% [51/60]; P = .26), consolidation (100% [54/54] vs. 96.3% [52/54]; P = .50), and GGO (91.7% [44/48] vs. 95.8% [46/48]; P = .69). CONCLUSIONS: SR-reconstructed chest radiographs using 64-fold reduced data showed a lower noise level than the original images, with equivalent sensitivity for detecting major abnormalities. ADVANCES IN KNOWLEDGE: This is the first study to apply super-resolution to data reduction of chest radiographs.
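To make the data-reduction setup concrete, here is a minimal NumPy sketch of the 64-fold reduction and the conventional LI baseline. The 8 × 8 block-averaging, the synthetic 64 × 64 test image, and all names are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def reduce_64fold(img, f=8):
    """Block-average f x f patches: f = 8 gives 64-fold fewer pixels."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def linear_upsample(small, f=8):
    """Separable linear interpolation back to the original grid (the LI baseline)."""
    h, w = small.shape
    ys = (np.arange(h * f) + 0.5) / f - 0.5   # target coordinates in source units
    xs = (np.arange(w * f) + 0.5) / f - 0.5
    rows = np.array([np.interp(ys, np.arange(h), col) for col in small.T]).T
    return np.array([np.interp(xs, np.arange(w), row) for row in rows])

def mse(a, b):
    """Mean-squared error between reconstructed and original images."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
original = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)  # smooth synthetic image
reduced = reduce_64fold(original)       # 8 x 8, i.e. 1/64 of the data
li_recon = linear_upsample(reduced)     # LI reconstruction from reduced data
print(mse(li_recon, original))
```

An SR network would replace `linear_upsample`, learning the mapping from `reduced` patches back to `original` patches.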


Subject(s)
Lung Diseases , Pneumothorax , Humans , Pneumothorax/diagnostic imaging , Neural Networks (Computer) , Radiography , Algorithms
2.
Nucl Med Mol Imaging ; 57(2): 86-93, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36998591

ABSTRACT

Purpose: Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods: The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: among the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 served as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results: The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margins of the tumors, whose ground truth was determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion: The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
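The Dice similarity coefficient used for the quantitative analysis can be sketched in a few lines of NumPy; the tiny 4 × 4 masks below are made-up examples, not study data:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1  # 4-voxel ground truth
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1    # 6-voxel prediction
print(dice(pred, truth))  # 2*4 / (6+4) = 0.8
```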

3.
J Nucl Med ; 64(4): 659-666, 2023 04.
Article in English | MEDLINE | ID: mdl-36328490

ABSTRACT

This paper proposes a novel method for automatic quantification of amyloid PET using deep learning-based spatial normalization (SN) of PET images, which does not require MRI or CT images of the same patient. The accuracy of the method was evaluated for 3 different amyloid PET radiotracers compared with MRI-parcellation-based PET quantification using FreeSurfer. Methods: A deep neural network model used for the SN of amyloid PET images was trained using 994 multicenter amyloid PET images (367 18F-flutemetamol and 627 18F-florbetaben) and the corresponding 3-dimensional MR images of subjects who had Alzheimer disease or mild cognitive impairment or were cognitively normal. For comparison, PET SN was also conducted using version 12 of the Statistical Parametric Mapping program (SPM-based SN). The accuracy of deep learning-based and SPM-based SN and SUV ratio quantification relative to the FreeSurfer-based estimation in individual brain spaces was evaluated using 148 other amyloid PET images (64 18F-flutemetamol and 84 18F-florbetaben). Additional external validation was performed using an unseen independent external dataset (30 18F-flutemetamol, 67 18F-florbetaben, and 39 18F-florbetapir). Results: Quantification results using the proposed deep learning-based method showed stronger correlations with the FreeSurfer estimates than SPM-based SN using MRI did. For example, the slope, y-intercept, and R2 values between SPM and FreeSurfer for the global cortex were 0.869, 0.113, and 0.946, respectively. In contrast, the slope, y-intercept, and R2 values between the proposed deep learning-based method and FreeSurfer were 1.019, -0.016, and 0.986, respectively. The external validation study also demonstrated better performance for the proposed method without MR images than for SPM with MRI. In most brain regions, the proposed method outperformed SPM SN in terms of linear regression parameters and intraclass correlation coefficients.
Conclusion: We evaluated a novel deep learning-based SN method that allows quantitative analysis of amyloid brain PET images without structural MRI. The quantification results using the proposed method showed a strong correlation with MRI-parcellation-based quantification using FreeSurfer for all clinical amyloid radiotracers. Therefore, the proposed method will be useful for investigating Alzheimer disease and related brain disorders using amyloid PET scans.
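The slope/intercept/R2 agreement analysis reported above can be reproduced on synthetic data with a short NumPy sketch; the simulated SUVR values and the near-identity relationship between the two methods are invented for illustration:

```python
import numpy as np

def agreement_stats(x, y):
    """Slope, intercept, and R^2 of y regressed on x
    (e.g., a candidate method's SUVR vs. the FreeSurfer reference)."""
    slope, intercept = np.polyfit(x, y, 1)
    yhat = slope * x + intercept
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(slope), float(intercept), float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(1)
freesurfer = rng.uniform(1.0, 2.5, 148)                      # reference SUVRs
method = 1.02 * freesurfer - 0.02 + rng.normal(0, 0.03, 148)  # hypothetical method
slope, intercept, r2 = agreement_stats(freesurfer, method)
print(round(slope, 2), round(intercept, 2), round(r2, 3))
```

A slope near 1, an intercept near 0, and R2 near 1 together indicate close agreement with the reference, which is the criterion used in the abstract.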


Subject(s)
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Aniline Compounds , Brain/diagnostic imaging , Amyloid , Amyloidogenic Proteins , Positron-Emission Tomography/methods , Neural Networks (Computer) , Magnetic Resonance Imaging/methods
5.
Eur J Nucl Med Mol Imaging ; 49(9): 3061-3072, 2022 07.
Article in English | MEDLINE | ID: mdl-35226120

ABSTRACT

PURPOSE: Alzheimer's disease (AD) studies have revealed that abnormal tau deposition spreads in a specific spatial pattern, described by the Braak stages. However, Braak staging is based on post-mortem brains, each of which represents only a cross section of the tau trajectory in disease progression, and numerous cases that do not conform to that model have been reported. This study therefore aimed to identify the tau trajectory and quantify tau progression in a data-driven approach using the continuous latent space learned by a variational autoencoder (VAE). METHODS: A total of 1080 [18F]flortaucipir brain positron emission tomography (PET) images were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A VAE was built to compress the hidden features of the tau images into a latent space. Hierarchical agglomerative clustering and a minimum spanning tree (MST) were applied to organize the features and calibrate them to tau progression, thus deriving a pseudo-time. The image-level tau trajectory was inferred by continuously sampling across the calibrated latent features. We assessed the pseudo-time with respect to the tau standardized uptake value ratio (SUVR) in AD-vulnerable regions, amyloid deposition, glucose metabolism, cognitive scores, and clinical diagnosis. RESULTS: We identified four clusters that plausibly capture certain stages of AD and organized them in the latent space. The inferred tau trajectory agreed with Braak staging. According to the derived pseudo-time, tau deposits first in the parahippocampal gyrus and amygdala and then spreads to the fusiform gyrus, inferior temporal lobe, and posterior cingulate, with amyloid accumulating before regional tau deposition. CONCLUSION: The spatiotemporal trajectory of tau progression inferred in this study was consistent with Braak staging, and the profiles of the other biomarkers in disease progression agreed well with previous findings. This approach also has the potential to quantify tau progression as a continuous variable by taking the whole-brain tau image into account.
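The MST-based calibration of latent features to a pseudo-time can be illustrated with a minimal Prim's-algorithm sketch; the four 2-D points below are hypothetical stand-ins for the learned cluster centers, not the study's latent space:

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm: grow a minimum spanning tree over latent-space points,
    returning the edges in the order they are added."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None  # (distance, tree node, new node)
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = float(np.linalg.norm(points[i] - points[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append(best[1:])
        in_tree.add(best[2])
    return edges

# four hypothetical cluster centroids in a 2-D latent space
centroids = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.0], [3.0, 0.2]])
print(mst_edges(centroids))  # a chain of edges usable as a pseudo-time ordering
```

On roughly collinear centroids the MST is a path, and walking that path gives a natural ordering (pseudo-time) of the clusters.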


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Alzheimer Disease/metabolism , Brain/metabolism , Carbolines , Cognitive Dysfunction/metabolism , Disease Progression , Humans , Positron-Emission Tomography/methods , tau Proteins/metabolism
6.
Eur J Nucl Med Mol Imaging ; 49(6): 1833-1842, 2022 05.
Article in English | MEDLINE | ID: mdl-34882262

ABSTRACT

PURPOSE: This study aims to compare two approaches that use only emission PET data and a convolutional neural network (CNN) to correct the attenuation (µ) of the annihilation photons in PET. METHODS: One approach uses a CNN to generate µ-maps from the non-attenuation-corrected (NAC) PET images (µ-CNNNAC). In the other, a CNN is used to improve the accuracy of µ-maps generated using maximum likelihood estimation of activity and attenuation (MLAA) reconstruction (µ-CNNMLAA). We investigated the improvement in CNN performance obtained by combining the two methods (µ-CNNMLAA+NAC) and the suitability of µ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS: The error of the attenuation correction factors estimated using µ-CT and µ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, indicating the validity of the scatter estimation from µ-CNNNAC. However, CNNNAC provided less accurate bone structures in the µ-maps, while the best results in recovering the fine bone structures were obtained by applying CNNMLAA+NAC. Additionally, the µ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using µ-CNNMLAA and µ-CNNMLAA+NAC were superior to those corrected using µ-CNNNAC in terms of their similarity to λ-CT. However, the improvement in the similarity with λ-CT achieved by combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions, λ-CNNNAC = 5.45% ± 7.88%; λ-CNNMLAA = 1.21% ± 5.74%; λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions, λ-CNNNAC = 1.37% ± 5.16%; λ-CNNMLAA = 0.23% ± 3.81%; λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION: The use of CNNNAC was feasible for scatter estimation to address the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
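The attenuation correction factor (ACF) comparison behind the reported errors can be sketched as a line integral over a µ-map along one line of response; the µ-values, voxel size, path length, and the 2% perturbation below are illustrative assumptions, not the study's data:

```python
import numpy as np

def acf(mu_line, dl=0.4):
    """Attenuation correction factor along one line of response:
    ACF = exp( integral of mu dl ), approximated by a Riemann sum
    over voxels of length dl (cm)."""
    return float(np.exp(np.sum(mu_line) * dl))

def percent_error(est, ref):
    return 100.0 * abs(est - ref) / ref

mu_ct = np.full(50, 0.0096)   # roughly water-like mu at 511 keV, 20 cm path
mu_est = mu_ct * 1.02         # a hypothetical mu-map overestimated by 2%
print(percent_error(acf(mu_est), acf(mu_ct)))
```

Because the ACF is exponential in the line integral, small systematic µ-errors translate into correspondingly amplified or damped ACF errors depending on the path length.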


Subject(s)
Deep Learning , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods
7.
Biomed Eng Lett ; 11(3): 263-271, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34350052

ABSTRACT

Although MR-guided radiotherapy (MRgRT) is advancing rapidly, generating accurate synthetic CT (sCT) from MRI is still challenging. Previous approaches using deep neural networks require a large dataset of precisely co-registered CT and MRI pairs, which is difficult to obtain due to respiration and peristalsis. Here, we propose a method to generate sCT based on deep learning training with weakly paired CT and MR images acquired from an MRgRT system, using a cycle-consistent GAN (CycleGAN) framework that allows unpaired image-to-image translation in the abdomen and thorax. Data from 90 cancer patients who underwent MRgRT were retrospectively used. CT images of the patients were aligned to the corresponding MR images using deformable registration, and the deformed CT (dCT) and MRI pairs were used for network training and testing. A 2.5D CycleGAN was constructed to generate sCT from the MRI input. To improve the sCT generation performance, a perceptual loss that measures the discrepancy between high-dimensional representations of images extracted from a well-trained classifier was incorporated into the CycleGAN. The CycleGAN with perceptual loss outperformed the U-Net in terms of errors and similarities between sCT and dCT, as well as dose estimation for treatment planning in the thorax and abdomen. The sCT generated using CycleGAN produced virtually identical dose distribution maps and dose-volume histograms compared to dCT. In summary, the CycleGAN with perceptual loss outperformed the U-Net in sCT generation when trained with weakly paired dCT-MRI for MRgRT. The proposed method will be useful for increasing the treatment accuracy of MR-only or MR-guided adaptive radiotherapy. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13534-021-00195-8.

8.
Phys Med Biol ; 66(11)2021 05 20.
Article in English | MEDLINE | ID: mdl-33910170

ABSTRACT

We propose a deep learning-based, data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not need a gated CT. The proposed method is a multi-step process that consists of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed for phase-matched AC of the gated-PET images. We conducted a non-rigid registration of the gated-PET images to generate motion-free PET images. We trained the CNN by conducting 3D patch-based learning with 80 oncologic whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT scan datasets and applied it to seven regional PET/CT scans that cover the lower lung and upper liver. We investigated the impact of the proposed respiratory phase-matched AC of PET without utilizing CT on tumor size and standard uptake value (SUV) assessment, and on PET image quality (%STD). The attenuation-corrected gated and motion-free PET images generated using the proposed method yielded sharper organ boundaries and better noise characteristics than conventional gated and ungated PET images. A banana artifact observed with phase-mismatched CT-based AC was not observed with the proposed approach. By employing the proposed method, the measured tumor size was reduced by 12.3% and SUV90% was increased by 13.3% in tumors with movements larger than 5 mm. The %STD of liver uptake was reduced by 11.1%. The deep learning-based, data-driven respiratory phase-matched AC method improved the PET image quality and reduced the motion artifacts.
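One common form of data-driven respiratory gating is amplitude-based binning of a surrogate signal. The sketch below (a synthetic sinusoidal trace and four quantile gates, both assumptions) shows the general idea, not the paper's specific gating algorithm:

```python
import numpy as np

def amplitude_gate(signal, n_gates=4):
    """Assign each time frame to a respiratory gate by amplitude quantile,
    so every gate receives a similar number of counts."""
    edges = np.quantile(signal, np.linspace(0, 1, n_gates + 1))
    gates = np.searchsorted(edges, signal, side="right") - 1
    return np.clip(gates, 0, n_gates - 1)

t = np.linspace(0, 60, 600)
resp = np.sin(2 * np.pi * t / 4.0)   # surrogate respiratory trace, 4-s period
gates = amplitude_gate(resp)
print(np.bincount(gates))            # frames assigned to each of the 4 gates
```

Each gate's events would then be reconstructed separately (here with MLAA) before the CNN enhancement and non-rigid registration steps.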


Subject(s)
Artifacts , Positron Emission Tomography Computed Tomography , Algorithms , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted , Motion , Positron-Emission Tomography
9.
Phys Med Biol ; 66(9)2021 04 27.
Article in English | MEDLINE | ID: mdl-33780912

ABSTRACT

Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method based on second-order smoothing priors sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iterative reweighting scheme to overcome the limitation of the original Bowsher method. In addition, we have derived a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In both the simulation and the real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. In addition, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving PET image quality based on the anatomical side information.
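The Bowsher prior's anatomical neighbor selection, and the l1 or l2 penalty applied to the selected neighbors, can be sketched in 1-D. The toy signals and the choice of b = 2 neighbors out of four offsets are illustrative assumptions; a zero penalty across the anatomically matched edge shows the edge-preserving behavior discussed above:

```python
import numpy as np

def bowsher_penalty(pet, anat, b=2, norm="l1"):
    """1-D Bowsher prior sketch: for each voxel, penalize differences to the
    b neighbors (among offsets -2, -1, +1, +2) whose anatomical values are
    most similar to the center voxel (the Bowsher selection rule)."""
    n = len(pet)
    total = 0.0
    offsets = [-2, -1, 1, 2]
    for i in range(n):
        nbrs = [i + o for o in offsets if 0 <= i + o < n]
        nbrs.sort(key=lambda j: abs(anat[j] - anat[i]))  # most similar first
        for j in nbrs[:b]:
            d = pet[i] - pet[j]
            total += abs(d) if norm == "l1" else d * d
    return total

anat = np.array([0., 0., 0., 1., 1., 1.])  # sharp anatomical boundary
pet = np.array([1., 1., 1., 3., 3., 3.])   # matched PET edge
print(bowsher_penalty(pet, anat, norm="l1"),
      bowsher_penalty(pet, anat, norm="l2"))   # both 0: edge is not penalized
print(bowsher_penalty(pet, np.zeros(6), norm="l2"))  # uniform anatomy: edge is penalized
```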


Subject(s)
Positron-Emission Tomography , Algorithms , Computer Simulation , Humans , Phantoms, Imaging
10.
Neuroimage ; 232: 117890, 2021 05 15.
Article in English | MEDLINE | ID: mdl-33617991

ABSTRACT

It is challenging to compare amyloid PET images obtained with different radiotracers. Here, we introduce a new approach to improve the interchangeability of amyloid PET acquired with different radiotracers through image-level translation. Deep generative networks were developed using unpaired PET datasets consisting of 203 [11C]PIB and 850 [18F]florbetapir brain PET images. Using 15 paired PET datasets, the standardized uptake value ratio (SUVR) values obtained from pseudo-PIB or pseudo-florbetapir PET images translated using the generative networks were compared to those obtained from the original images. The generated amyloid PET images showed distribution patterns similar to those of original amyloid PET acquired with the other radiotracer. The SUVR obtained from the original [18F]florbetapir PET was lower than that obtained from the original [11C]PIB PET; the translated amyloid PET images reduced this difference in SUVR. The SUVR obtained from the pseudo-PIB PET images generated from [18F]florbetapir PET showed good agreement with those of the original PIB PET (ICC = 0.87 for global SUVR). The SUVR obtained from the pseudo-florbetapir PET also showed good agreement with those of the original [18F]florbetapir PET (ICC = 0.85 for global SUVR). The ICC values between the original and generated PET images were higher than those between the original [11C]PIB and [18F]florbetapir images (ICC = 0.65 for global SUVR). Our approach provides image-level translation of amyloid PET images obtained using different radiotracers. It may facilitate clinical studies that involve heterogeneous amyloid PET images, such as long-term clinical follow-up and multicenter trials, by enabling translation between different types of amyloid PET.


Subject(s)
Amyloid/metabolism , Aniline Compounds/metabolism , Brain/metabolism , Deep Learning , Positron-Emission Tomography/methods , Stilbenes/metabolism , Thiazoles/metabolism , Aged , Aged, 80 and over , Brain/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Radiopharmaceuticals/metabolism
11.
Sci Rep ; 11(1): 1673, 2021 01 18.
Article in English | MEDLINE | ID: mdl-33462321

ABSTRACT

The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, due to the long scan time for 3D MR images, 2D images are mainly obtained in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-Net-like structure and DenseNet sub-blocks. To train the network, not only a fidelity loss but also a perceptual loss based on the VGG network was considered. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features similar to the original 3D data. The diagnostic ability of the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses assessing gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for utilizing advanced neuroimaging techniques with 2D MRI data.


Subject(s)
Brain/anatomy & histology , Deep Learning , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Aged , Brain/diagnostic imaging , Female , Humans , Male , Neural Networks (Computer)
12.
Mol Psychiatry ; 26(7): 3476-3488, 2021 07.
Article in English | MEDLINE | ID: mdl-32929214

ABSTRACT

Although antipsychotic drugs are effective for relieving the psychotic symptoms of first-episode psychosis (FEP), psychotic relapse is common during the course of the illness. While some patients with FEP remain remitted even without medication, antipsychotic discontinuation is regarded as the most common risk factor for relapse. Considering the actions of antipsychotic drugs on presynaptic and postsynaptic dopamine dysregulation, this study evaluated possible mechanisms underlying relapse after antipsychotic discontinuation. Twenty-five clinically stable patients with FEP and 14 matched healthy controls were enrolled. Striatal dopamine activity was assessed as the Kicer value using [18F]DOPA PET before and 6 weeks after antipsychotic discontinuation. D2/3 receptor availability was measured as BPND using [11C]raclopride PET after antipsychotic discontinuation. Healthy controls also underwent PET scans according to the corresponding schedule of the patients. Patients were monitored for psychotic relapse during the 12 weeks after antipsychotic discontinuation. Forty percent of the patients showed psychotic relapse after antipsychotic discontinuation. The change in Kicer value over time differed significantly among relapsed patients, non-relapsed patients, and healthy controls (Week*Group: F = 4.827, df = 2,253.193, p = 0.009). In relapsed patients, a significant correlation was found between baseline striatal Kicer values and time to relapse after antipsychotic discontinuation (R2 = 0.518, p = 0.018). BPND did not differ significantly among relapsed patients, non-relapsed patients, and healthy controls (F = 1.402, df = 2,32.000, p = 0.261). These results suggest that dysfunctional dopamine autoregulation might precipitate psychotic relapse after antipsychotic discontinuation in FEP. This finding could be used for developing a strategy for the prevention of psychotic relapse related to antipsychotic discontinuation.


Subject(s)
Antipsychotic Agents , Psychotic Disorders , Antipsychotic Agents/therapeutic use , Dihydroxyphenylalanine , Dopamine/therapeutic use , Humans , Positron-Emission Tomography , Psychotic Disorders/diagnostic imaging , Psychotic Disorders/drug therapy , Raclopride , Recurrence
13.
Nucl Med Mol Imaging ; 54(6): 299-304, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33282001

ABSTRACT

PURPOSE: Early deep-learning-based image denoising techniques mainly focused on a fully supervised model that learns how to generate a clean image from a noisy input (noise2clean: N2C). The aim of this study is to explore the feasibility of self-supervised methods (noise2noise: N2N and noiser2noise: Nr2N) for PET image denoising based on measured PET datasets by comparing their performance with the conventional N2C model. METHODS: For training and evaluating the networks, 18F-FDG brain PET/CT scan data of 14 patients were retrospectively used (10 for training and 4 for testing). From the 60-min list-mode data, we generated a total of 100 data bins of 10-s duration. We also generated 40-s-long data by adding four non-overlapping 10-s bins, and 300-s-long reference data by adding all the list-mode data. We employed the U-Net, which is widely used for various tasks in biomedical imaging, to train and test the proposed denoising models. RESULTS: All of N2C, N2N, and Nr2N were effective for improving the noisy inputs. While N2N showed PSNR equivalent to that of N2C at all noise levels, Nr2N yielded higher SSIM than N2N. N2N yielded denoised images similar to the Gaussian-filtered reference image regardless of the input noise level, and image contrast was better in the N2N results. CONCLUSION: The self-supervised denoising method will be useful for reducing the PET scan time or radiation dose.
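The benefit of summing short bins, and the PSNR figure of merit used to compare the models, can be sketched with synthetic frames; the image size, count levels, and Gaussian noise model are all made-up stand-ins for the real list-mode bins:

```python
import numpy as np

def psnr(img, ref):
    """Peak signal-to-noise ratio (dB) of img against a reference image."""
    mse = np.mean((img - ref) ** 2)
    return float(10 * np.log10(ref.max() ** 2 / mse))

rng = np.random.default_rng(2)
ref = np.ones((32, 32)) * 100.0                       # long-acquisition reference
bins_10s = ref / 4 + rng.normal(0, 10, (4, 32, 32))   # four noisy short bins
img_40s = bins_10s.sum(axis=0)                        # summed 40-s-like frame

# Summing independent bins reduces relative noise, so PSNR improves:
print(psnr(4 * bins_10s[0], ref), psnr(img_40s, ref))
```

N2N-style training exploits exactly this independence: two noisy realizations of the same underlying image can serve as input and target for each other.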

14.
Phys Med ; 72: 60-72, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32200299

ABSTRACT

In positron emission tomography (PET) studies, the voxel-wise calculation of individual rate constants describing the tracer kinetics is quite challenging because of the nonlinear relationship between the rate constants and PET data and the high noise level in voxel data. Based on preliminary simulations using a standard two-tissue compartment model, we can hypothesize that it is possible to reduce errors in the rate constant estimates when constraining the overestimation of the larger of two exponents in the model equation. We thus propose a novel approach based on infinity-norm regularization for limiting this exponent. Owing to the non-smooth cost function of this regularization scheme, which prevents the use of conventional Jacobian-based optimization methods, we examined a proximal gradient algorithm and the particle swarm optimization (PSO) through a simulation study. Because it exploits multiple initial values, the PSO method shows much better convergence than the proximal gradient algorithm, which is susceptible to the initial values. In the implementation of PSO, the use of a Gamma distribution to govern random movements was shown to improve the convergence rate and stability compared to a uniform distribution. Consequently, Gamma-based PSO with regularization was shown to outperform all other methods tested, including the conventional basis function method and Levenberg-Marquardt algorithm, in terms of its statistical properties.
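A minimal particle swarm optimizer with Gamma-distributed (mean-1) random multipliers, in the spirit of the Gamma-based PSO described above, might look as follows on a toy one-parameter fit; the swarm settings, Gamma shape and scale, and the toy exponential cost function are all assumptions for illustration:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, seed=0):
    """Minimal 1-D particle swarm optimizer whose random multipliers are
    drawn from a Gamma distribution with mean 1 instead of Uniform(0, 2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest = x.copy()
    pcost = np.array([cost(xi) for xi in x])
    g = pbest[np.argmin(pcost)]                      # global best
    for _ in range(iters):
        r1 = rng.gamma(shape=2.0, scale=0.5, size=n_particles)  # mean 1
        r2 = rng.gamma(shape=2.0, scale=0.5, size=n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(xi) for xi in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return float(g)

# toy one-parameter "rate constant" fit with true k = 0.3
best = pso(lambda k: (np.exp(-5 * k) - np.exp(-5 * 0.3)) ** 2, (0.0, 2.0))
print(best)
```

Because each particle carries its own position history, PSO avoids the initial-value sensitivity that the abstract reports for the proximal gradient approach.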


Subject(s)
Image Processing, Computer-Assisted/methods , Nonlinear Dynamics , Positron Emission Tomography Computed Tomography , Animals , Fluorodeoxyglucose F18 , Kinetics , Male , Mice , Mice, Inbred C57BL
15.
J Nucl Med ; 60(8): 1183-1189, 2019 08.
Article in English | MEDLINE | ID: mdl-30683763

ABSTRACT

We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use activity and attenuation maps estimated using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) to learn a CT-derived attenuation map. Methods: The whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-net was trained to predict a CT-derived µ-map (µ-CT) from the MLAA-generated activity distribution (λ-MLAA) and µ-map (µ-MLAA). We used 1.3 million patches derived from 60 patients' data for training the CNN, data of 20 others were used as a validation set to prevent overfitting, and the data of the other 20 were used as a test set for the CNN performance analysis. The attenuation maps generated using the proposed method (µ-CNN), µ-MLAA, and 4-segment method (µ-segment) were compared with the µ-CT, a ground truth. We also compared the voxelwise correlation between the activity images reconstructed using ordered-subset expectation maximization with the µ-maps, and the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generates less noisy attenuation maps and achieves better bone identification than MLAA. The average Dice similarity coefficient for bone regions between µ-CNN and µ-CT was 0.77, which was significantly higher than that between µ-MLAA and µ-CT (0.36). Also, the CNN result showed the best pixel-by-pixel correlation with the CT-based results and remarkably reduced differences in activity maps in comparison to CT-based attenuation correction. 
Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.


Subject(s)
Brain Mapping , Fluorodeoxyglucose F18/pharmacology , Magnetic Resonance Imaging , Neoplasms/diagnostic imaging , Positron-Emission Tomography , Adult , Aged , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Male , Metals , Middle Aged , Multimodal Imaging , Neural Networks (Computer) , Reproducibility of Results , Retrospective Studies , Whole Body Imaging
16.
Phys Med Biol ; 63(14): 145011, 2018 07 16.
Article in English | MEDLINE | ID: mdl-29923839

ABSTRACT

The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as input and their middle slice as the label. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform five-fold cross-validation to confirm the performance consistency. Because all input and output images are used in two-dimensional slice format, the total number of slices for training the CNN is 7670. We assess the performance of the proposed method with respect to the resolution and contrast, as well as the noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is the deblurring of boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and a lower normalized root mean square error than the input (thicker slices). The CNN output noise level is lower than the ground truth and equivalent to the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and denoising.
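The training-pair construction described above (axially averaged input, middle thin slice as label) can be sketched directly; the stack size and the 5-slice averaging window below are assumed for illustration:

```python
import numpy as np

def make_training_pair(thin_stack, k=5):
    """From a stack of k thin CT slices, average them to mimic a thick-slice
    input and take the middle thin slice as the high-resolution label."""
    assert k % 2 == 1, "use an odd window so a middle slice exists"
    thick_input = thin_stack[:k].mean(axis=0)  # simulated thick slice
    label = thin_stack[k // 2]                 # middle thin slice
    return thick_input, label

rng = np.random.default_rng(3)
stack = rng.normal(size=(5, 16, 16))   # five synthetic thin slices
x, y = make_training_pair(stack, k=5)
print(x.shape, y.shape)
```

Sliding this window along the axial direction of each study yields the 2D input/label slice pairs used to train the network.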


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks (Computer) , Parkinson Disease/diagnostic imaging , Tomography, X-Ray Computed/methods , Aged , Female , Humans , Male , Radiation Dosage , Signal-To-Noise Ratio
17.
Hum Brain Mapp ; 39(9): 3769-3778, 2018 09.
Article in English | MEDLINE | ID: mdl-29752765

ABSTRACT

Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research.


Subject(s)
Alzheimer Disease/diagnostic imaging , Amyloid/analysis , Brain/diagnostic imaging , Deep Learning , Positron-Emission Tomography/methods , Supervised Machine Learning , Algorithms , Alzheimer Disease/pathology , Aniline Compounds , Benzothiazoles , Brain/pathology , Carbon Radioisotopes , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/pathology , Female , Humans , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging , Male , Radiopharmaceuticals , Thiazoles
18.
Phys Med Biol ; 63(11): 115015, 2018 06 07.
Article in English | MEDLINE | ID: mdl-29658493

ABSTRACT

Here, we propose a novel method to identify inter-crystal scattering (ICS) events in a PET detector that is applicable even to light-sharing designs. In the proposed method, the detector observation was modeled as a linear problem, and ICS events were identified by solving it. Two identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated in simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10, and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first-interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm3 LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. The simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events to the first-interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively; these values improved to 1.72 and 1.83 mm after ICS recovery.
In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors, and we experimentally validated that ICS recovery based on the proposed identification method led to improved resolution.
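The linear model underlying the pseudoinverse variant can be sketched in a few lines of NumPy. Everything below is illustrative, not the paper's implementation: the system matrix, the deposited energies, and the 10% significance threshold are all assumptions. Photosensor signals y are modeled as A·x for per-crystal energy deposits x, and an event is flagged as ICS when the pseudoinverse estimate assigns significant energy to more than one crystal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # 8 x 8 crystals read out one-to-one by an 8 x 8 photosensor

# Assumed-known system (light-distribution) matrix: column j gives the
# photosensor response to unit energy deposited in crystal j.
A = np.abs(rng.normal(size=(n, n))) + 0.01
A /= A.sum(axis=0)  # each crystal's collected light sums to its deposited energy

# Simulate an ICS event: a 511 keV photon splits between two neighboring crystals
x_true = np.zeros(n)
x_true[[10, 11]] = [350.0, 161.0]  # keV (hypothetical split)
y = A @ x_true                     # observed photosensor signals

# Solve the linear problem via the pseudoinverse to estimate per-crystal energies
x_hat = np.linalg.pinv(A) @ y

# Flag the event as ICS when more than one crystal carries significant energy
significant = x_hat > 0.1 * x_hat.max()
is_ics = int(significant.sum()) > 1
```

For light-sharing geometries with more crystals than sensor pixels, the plain pseudoinverse becomes under-determined, which is presumably why the paper also proposes a convex constrained formulation (e.g., non-negative energies).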


Subject(s)
Photons , Positron-Emission Tomography/instrumentation , Scintillation Counting/instrumentation , Positron-Emission Tomography/methods , Radiation Scattering , Scintillation Counting/methods
19.
J Nucl Med ; 59(10): 1624-1629, 2018 10.
Article in English | MEDLINE | ID: mdl-29449446

ABSTRACT

Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (µ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. Methods: We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction: 18F-fluorinated-N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl)nortropane (18F-FP-CIT) PET scans, which show highly specific binding to the striatum of the brain. Three different CNN architectures (convolutional autoencoder [CAE], Unet, and a hybrid of CAE and Unet) were designed and trained to learn a CT-derived µ-map (µ-CT) from the MLAA-generated activity distribution and µ-map (µ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For the training of the CNNs, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to µ-CT of the CNN-generated µ-maps (µ-CAE, µ-Unet, and µ-Hybrid) and of µ-MLAA was compared using Dice similarity coefficients. In addition, we compared the activity concentrations of specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions and the binding ratios in the striatum in the PET activity images reconstructed using those µ-maps. Results: The CNNs generated less noisy and more uniform µ-maps than the original µ-MLAA. Moreover, air cavities and bones were better resolved in the proposed CNN outputs. In addition, the proposed deep learning approach was useful for mitigating the crosstalk problem in the MLAA reconstruction.
The hybrid network of CAE and Unet yielded the µ-maps most similar to µ-CT (whole-head Dice similarity coefficients of 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding-ratio quantification. Conclusion: The proposed deep learning approach is promising for accurate attenuation correction of activity distribution in time-of-flight PET systems.
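The Dice similarity coefficient used above to compare µ-maps is simple to compute. The sketch below is a minimal illustration on toy binary masks (hypothetical data, not the study's), standing in for, e.g., a bone mask from a CNN-generated µ-map versus the CT-derived one:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 8 x 8 masks: two 4 x 4 squares offset by one voxel in each direction
mask_cnn = np.zeros((8, 8), dtype=bool); mask_cnn[2:6, 2:6] = True  # 16 voxels
mask_ct  = np.zeros((8, 8), dtype=bool); mask_ct[3:7, 3:7]  = True  # 16 voxels
print(dice(mask_cnn, mask_ct))  # 9 overlapping voxels -> 2*9/(16+16) = 0.5625
```

A coefficient of 1.0 means perfect overlap, 0.0 means none, which makes the reported 0.79 (bone) and 0.72 (air) directly interpretable as overlap quality.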


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Aged , Brain/diagnostic imaging , Brain/metabolism , Dopamine/metabolism , Female , Humans , Male , Positron Emission Tomography Computed Tomography , Time Factors
20.
Anticancer Res ; 37(3): 1139-1148, 2017 03.
Article in English | MEDLINE | ID: mdl-28314275

ABSTRACT

BACKGROUND/AIM: To compare the relationship between Ktrans from DCE-MRI and K1 from dynamic 13N-NH3 PET obtained with simultaneous versus separate MR/PET in the VX-2 rabbit carcinoma model. MATERIALS AND METHODS: MR/PET was performed simultaneously and separately, 14 and 15 days after VX-2 tumor implantation in the paravertebral muscle. The Ktrans and K1 values were estimated using an in-house software program. The relationships between Ktrans and K1 were analyzed using Pearson's correlation coefficients and linear/non-linear regression functions. RESULTS: Assuming a linear relationship, Ktrans and K1 exhibited moderate positive correlations with both simultaneous (r=0.54-0.57) and separate (r=0.53-0.69) imaging. However, while Ktrans and K1 from separate imaging were linearly correlated, those from simultaneous imaging exhibited a non-linear relationship: the change in K1 associated with a unit increase in Ktrans varied depending on the Ktrans value. CONCLUSION: The relationship between Ktrans and K1 may be misinterpreted with separate MR and PET acquisition.
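The analysis pattern here (Pearson's r plus a regression fit) is easy to reproduce. The values below are synthetic, chosen only to mimic a saturating K1-versus-Ktrans trend; they are not the study's data:

```python
import numpy as np

# Synthetic Ktrans (DCE-MRI) and K1 (dynamic 13N-NH3 PET) values; K1 flattens
# at high Ktrans, mimicking a non-linear (saturating) relationship.
ktrans = np.array([0.05, 0.10, 0.15, 0.22, 0.30, 0.41])
k1     = np.array([0.12, 0.20, 0.26, 0.31, 0.34, 0.36])

r = np.corrcoef(ktrans, k1)[0, 1]             # Pearson's correlation coefficient
slope, intercept = np.polyfit(ktrans, k1, 1)  # linear least-squares fit
residuals = k1 - (slope * ktrans + intercept) # curvature here hints at non-linearity
```

Note that r is high even for this clearly saturating trend: a large Pearson coefficient alone does not establish linearity, and it is the residual pattern (or a non-linear fit) that reveals the kind of behavior the study reports for simultaneous imaging.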


Subject(s)
Magnetic Resonance Imaging , Neoplasms/pathology , Positron-Emission Tomography , Animals , Carcinoma/metabolism , Cell Line, Tumor , Contrast Media/chemistry , Image Processing, Computer-Assisted , Linear Models , Multimodal Imaging/methods , Neoplasm Transplantation , Perfusion , Rabbits