Results 1 - 20 of 24
1.
Eur J Nucl Med Mol Imaging ; 49(6): 1833-1842, 2022 05.
Article in English | MEDLINE | ID: mdl-34882262

ABSTRACT

PURPOSE: This study aims to compare two approaches that use only emission PET data and a convolutional neural network (CNN) to correct for the attenuation (µ) of the annihilation photons in PET. METHODS: One approach uses a CNN to generate µ-maps from the non-attenuation-corrected (NAC) PET images (µ-CNNNAC). In the other, a CNN is used to improve the accuracy of µ-maps generated using maximum likelihood estimation of activity and attenuation (MLAA) reconstruction (µ-CNNMLAA). We investigated the improvement in CNN performance obtained by combining the two methods (µ-CNNMLAA+NAC) and the suitability of µ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS: The error between the attenuation correction factors estimated using µ-CT and µ-CNNNAC was over 7%, but the error in scatter estimates was only 2.5%, indicating the validity of scatter estimation from µ-CNNNAC. However, CNNNAC provided less accurate bone structures in the µ-maps, while the best results in recovering fine bone structures were obtained by applying CNNMLAA+NAC. Additionally, the µ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using µ-CNNMLAA and µ-CNNMLAA+NAC were superior to those corrected using µ-CNNNAC in terms of their similarity to λ-CT. However, the improvement in similarity to λ-CT achieved by combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions, λ-CNNNAC = 5.45% ± 7.88%; λ-CNNMLAA = 1.21% ± 5.74%; λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions, λ-CNNNAC = 1.37% ± 5.16%; λ-CNNMLAA = 0.23% ± 3.81%; λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION: The use of CNNNAC was feasible for scatter estimation to address the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
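As background for the µ-map comparison above, an attenuation correction factor (ACF) is the exponential of the line integral of µ along a line of response; measured counts are multiplied by the ACF. The following is a minimal numpy sketch under a simplified parallel-beam geometry, not the paper's reconstruction pipeline; the function name, voxel size, and water-like µ value are illustrative assumptions.

```python
import numpy as np

def attenuation_correction_factors(mu_map, voxel_size_cm=0.4, axis=0):
    """Compute ACFs for parallel lines of response along one axis.

    ACF = exp(line integral of mu), so attenuation-corrected counts are
    measured counts multiplied by the ACF.
    """
    line_integrals = mu_map.sum(axis=axis) * voxel_size_cm  # cm^-1 * cm
    return np.exp(line_integrals)

# Toy 2D mu-map: 10 cm of water-equivalent tissue (mu ~ 0.096 /cm at 511 keV)
mu = np.full((25, 25), 0.096)   # 25 voxels * 0.4 cm = 10 cm
acf = attenuation_correction_factors(mu)
```

For 10 cm of water-equivalent tissue the ACF is exp(0.96), roughly 2.6, which illustrates why even a 7% ACF error matters for quantitative PET.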


Subjects
Deep Learning , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods
2.
Eur J Nucl Med Mol Imaging ; 49(9): 3061-3072, 2022 07.
Article in English | MEDLINE | ID: mdl-35226120

ABSTRACT

PURPOSE: Alzheimer's disease (AD) studies have revealed that abnormal tau deposition spreads in a specific spatial pattern, described by the Braak stages. However, Braak staging is based on post mortem brains, each of which represents only a cross section of the tau trajectory in disease progression, and numerous reported cases do not conform to that model. This study therefore aimed to identify the tau trajectory and quantify tau progression with a data-driven approach using the continuous latent space learned by a variational autoencoder (VAE). METHODS: A total of 1080 [18F]Flortaucipir brain positron emission tomography (PET) images were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A VAE was built to compress the hidden features of the tau images into a latent space. Hierarchical agglomerative clustering and a minimum spanning tree (MST) were applied to organize the features and calibrate them to the tau progression, thus deriving a pseudo-time. The image-level tau trajectory was inferred by continuously sampling across the calibrated latent features. We assessed the pseudo-time with regard to tau standardized uptake value ratio (SUVR) in AD-vulnerable regions, amyloid deposition, glucose metabolism, cognitive scores, and clinical diagnosis. RESULTS: We identified four clusters that plausibly capture certain stages of AD and organized the clusters in the latent space. The inferred tau trajectory agreed with Braak staging. According to the derived pseudo-time, tau first deposits in the parahippocampal gyrus and amygdala, and then spreads to the fusiform gyrus, inferior temporal lobe, and posterior cingulate. Amyloid accumulates before this regional tau deposition. CONCLUSION: The spatiotemporal trajectory of tau progression inferred in this study was consistent with Braak staging, and the profiles of the other biomarkers in disease progression agreed well with previous findings. We also showed that this approach has the potential to quantify tau progression as a continuous variable by taking the whole-brain tau image into account.
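The cluster-ordering step above, building a minimum spanning tree over latent-space centroids and reading a pseudo-time off it, can be sketched as follows. This is a schematic of the general MST pseudo-time idea, not the paper's exact calibration; scipy is assumed, and the centroid data, root choice, and function name are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import cdist

def pseudo_time(latent_centroids, root=0):
    """Pseudo-time = geodesic distance from a root node along the MST
    built over cluster centroids in latent space, normalized to [0, 1]."""
    d = cdist(latent_centroids, latent_centroids)   # pairwise distances
    mst = minimum_spanning_tree(d)                  # sparse tree over clusters
    geo = shortest_path(mst, directed=False)        # distances along the tree
    t = geo[root]
    return t / t.max()

# Four toy cluster centroids lying along a 1D progression in a 2D latent space
centroids = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
t = pseudo_time(centroids)   # -> [0.0, 1/3, 2/3, 1.0]
```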


Subjects
Alzheimer Disease , Cognitive Dysfunction , Alzheimer Disease/metabolism , Brain/metabolism , Carbolines , Cognitive Dysfunction/metabolism , Disease Progression , Humans , Positron-Emission Tomography/methods , tau Proteins/metabolism
3.
Mol Psychiatry ; 26(7): 3476-3488, 2021 07.
Article in English | MEDLINE | ID: mdl-32929214

ABSTRACT

Although antipsychotic drugs are effective for relieving the psychotic symptoms of first-episode psychosis (FEP), psychotic relapse is common during the course of the illness. While some patients with FEP remain in remission even without medication, antipsychotic discontinuation is regarded as the most common risk factor for relapse. Considering the actions of antipsychotic drugs on presynaptic and postsynaptic dopamine dysregulation, this study evaluated possible mechanisms underlying relapse after antipsychotic discontinuation. Twenty-five clinically stable patients with FEP and 14 matched healthy controls were enrolled. Striatal dopamine activity was assessed as the Kicer value using [18F]DOPA PET before and 6 weeks after antipsychotic discontinuation. D2/3 receptor availability was measured as BPND using [11C]raclopride PET after antipsychotic discontinuation. Healthy controls underwent PET scans on a schedule corresponding to that of the patients. Patients were monitored for psychotic relapse for 12 weeks after antipsychotic discontinuation. Forty percent of the patients showed psychotic relapse after antipsychotic discontinuation. The change in Kicer value over time differed significantly among relapsed patients, non-relapsed patients, and healthy controls (Week*Group: F = 4.827, df = 2, 253.193, p = 0.009). In relapsed patients, a significant correlation was found between baseline striatal Kicer values and time to relapse after antipsychotic discontinuation (R2 = 0.518, p = 0.018). BPND did not differ significantly among relapsed patients, non-relapsed patients, and healthy controls (F = 1.402, df = 2, 32.000, p = 0.261). These results suggest that dysfunctional dopamine autoregulation might precipitate psychotic relapse after antipsychotic discontinuation in FEP. This finding could inform strategies for preventing psychotic relapse related to antipsychotic discontinuation.


Subjects
Antipsychotic Agents , Psychotic Disorders , Antipsychotic Agents/therapeutic use , Dihydroxyphenylalanine , Dopamine/therapeutic use , Humans , Positron-Emission Tomography , Psychotic Disorders/diagnostic imaging , Psychotic Disorders/drug therapy , Raclopride , Recurrence
4.
Neuroimage ; 232: 117890, 2021 05 15.
Article in English | MEDLINE | ID: mdl-33617991

ABSTRACT

It is challenging to compare amyloid PET images obtained with different radiotracers. Here, we introduce a new approach to improve the interchangeability of amyloid PET acquired with different radiotracers through image-level translation. Deep generative networks were developed using unpaired PET datasets consisting of 203 [11C]PIB and 850 [18F]florbetapir brain PET images. Using 15 paired PET datasets, the standardized uptake value ratio (SUVR) values obtained from pseudo-PIB or pseudo-florbetapir PET images translated by the generative networks were compared to those obtained from the original images. The generated amyloid PET images showed distribution patterns similar to those of original amyloid PET images of the other radiotracer. The SUVR values obtained from the original [18F]florbetapir PET were lower than those obtained from the original [11C]PIB PET, and the translated amyloid PET images reduced this difference. The SUVR values obtained from the pseudo-PIB PET images generated from [18F]florbetapir PET showed good agreement with those of the original PIB PET (ICC = 0.87 for global SUVR). The SUVR values obtained from the pseudo-florbetapir PET images also showed good agreement with those of the original [18F]florbetapir PET (ICC = 0.85 for global SUVR). The ICC values between the original and generated PET images were higher than those between the original [11C]PIB and [18F]florbetapir images (ICC = 0.65 for global SUVR). Our approach provides image-level translation of amyloid PET images obtained using different radiotracers. It may facilitate clinical studies that involve heterogeneous amyloid PET images due to long-term follow-up, as well as multicenter trials, by enabling translation between different types of amyloid PET.
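The intraclass correlation coefficients reported above can be computed, for example, as a two-way random, absolute-agreement, single-measure ICC(2,1). The abstract does not state which ICC variant was used, so that choice is an assumption here, and the SUVR values below are toy data.

```python
import numpy as np

def icc_2_1(x, y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    for two 'raters' (e.g., original vs. pseudo-tracer SUVR per subject)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Toy per-subject global SUVR pairs (original vs. translated image)
suvr_pib = np.array([1.10, 1.45, 1.80, 2.10, 1.30])
suvr_pseudo = np.array([1.12, 1.40, 1.85, 2.05, 1.35])
icc = icc_2_1(suvr_pib, suvr_pseudo)   # close to 1 for near-identical ratings
```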


Subjects
Amyloid/metabolism , Aniline Compounds/metabolism , Brain/metabolism , Deep Learning , Positron-Emission Tomography/methods , Stilbenes/metabolism , Thiazoles/metabolism , Aged , Aged, 80 and over , Brain/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Radiopharmaceuticals/metabolism
5.
Hum Brain Mapp ; 39(9): 3769-3778, 2018 09.
Article in English | MEDLINE | ID: mdl-29752765

ABSTRACT

Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 augmented datasets generated by rotating 527 randomly selected datasets, and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between slices (in 0.02 s). Because the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research.
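The rotation-based augmentation described above (685,100 training samples generated from 527 datasets) can be sketched as follows. This is a generic illustration, not the paper's augmentation code; the angles, interpolation order, and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_by_rotation(volume, angles_deg, axes=(0, 1)):
    """Generate rotated copies of a 3D volume for training-set augmentation.

    reshape=False keeps each output the same shape as the input, so the
    augmented volumes can be stacked into a single training array.
    """
    return np.stack([rotate(volume, a, axes=axes, reshape=False, order=1)
                     for a in angles_deg])

vol = np.random.rand(32, 32, 32).astype(np.float32)   # toy PET volume
aug = augment_by_rotation(vol, angles_deg=[-10, -5, 5, 10])
# aug stacks the four rotated copies: shape (4, 32, 32, 32)
```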


Subjects
Alzheimer Disease/diagnostic imaging , Amyloid/analysis , Brain/diagnostic imaging , Deep Learning , Positron-Emission Tomography/methods , Supervised Machine Learning , Algorithms , Alzheimer Disease/pathology , Aniline Compounds , Benzothiazoles , Brain/pathology , Carbon Radioisotopes , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/pathology , Female , Humans , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging , Male , Radiopharmaceuticals , Thiazoles
6.
Nucl Med Mol Imaging ; 58(6): 354-363, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39308485

ABSTRACT

Purpose: Dopamine transporter imaging is crucial for assessing presynaptic dopaminergic neurons in Parkinson's disease (PD) and related parkinsonian disorders. While 18F-FP-CIT PET offers advantages in spatial resolution and sensitivity over 123I-β-CIT or 123I-FP-CIT SPECT imaging, accurate quantification remains essential. This study presents a novel automatic quantification method for 18F-FP-CIT PET images, utilizing an artificial intelligence (AI)-based robust PET spatial normalization (SN) technology that eliminates the need for anatomical images. Methods: The proposed SN engine consists of convolutional neural networks trained using 213 paired datasets of 18F-FP-CIT PET and 3D structural MRI. Remarkably, only PET images are required as input during inference. A cyclic training strategy enables backward deformation from template to individual space. An additional 89 paired 18F-FP-CIT PET and 3D MRI datasets were used to evaluate the accuracy of striatal activity quantification. MRI-based PET quantification using FIRST software was also conducted for comparison. The proposed method was further validated using 135 external datasets. Results: The proposed AI-based method successfully generated spatially normalized 18F-FP-CIT PET images, obviating the need for CT or MRI. The striatal PET activity determined by the proposed PET-only method and by MRI-based PET quantification using the FIRST algorithm were highly correlated, with R2 ranging from 0.96 to 0.99 and slopes ranging from 0.98 to 1.02 in both internal and external datasets. Conclusion: Our AI-based SN method enables accurate automatic quantification of striatal activity in 18F-FP-CIT brain PET images without MRI support. This approach holds promise for evaluating presynaptic dopaminergic function in PD and related parkinsonian disorders.

7.
J Nucl Med ; 65(10): 1645-1651, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39209545

ABSTRACT

Quantification of 18F-FDG PET images is useful for accurate diagnosis and evaluation of various brain diseases, including brain tumors, epilepsy, dementia, and Parkinson disease. However, accurate quantification of 18F-FDG PET images requires matched 3-dimensional T1 MRI scans of the same individuals to provide detailed information on brain anatomy. In this paper, we propose a transfer learning approach to adapt a pretrained deep neural network model from amyloid PET to spatially normalize 18F-FDG PET images without the need for 3-dimensional MRI. Methods: The proposed method is based on a deep learning model for automatic spatial normalization of 18F-FDG brain PET images, which was developed by fine-tuning a pretrained model for amyloid PET using only 103 18F-FDG PET and MR images. After training, the algorithm was tested on 65 internal and 78 external test sets. All T1 MR images with a 1-mm isotropic voxel size were processed with FreeSurfer software to provide cortical segmentation maps used to extract a ground-truth regional SUV ratio using cerebellar gray matter as a reference region. These values were compared with those from spatial normalization-based quantification methods using the proposed method and statistical parametric mapping software. Results: The proposed method showed superior spatial normalization compared with statistical parametric mapping, as evidenced by increased normalized mutual information and better size and shape matching in PET images. Quantitative evaluation revealed a consistently higher SUV ratio correlation and intraclass correlation coefficients for the proposed method across various brain regions in both internal and external datasets. The good correlation and intraclass correlation coefficient values of the proposed method for the external dataset are noteworthy, considering the dataset's different ethnic distribution and the use of different PET scanners and image reconstruction algorithms.
Conclusion: This study successfully applied transfer learning to a deep neural network for 18F-FDG PET spatial normalization, demonstrating its resource efficiency and improved performance. This highlights the efficacy of transfer learning, which requires a smaller number of datasets than does the original network training, thus increasing the potential for broader use of deep learning-based brain PET spatial normalization techniques for various clinical and research radiotracers.
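The normalized mutual information used above to score spatial normalization quality can be computed from a joint intensity histogram. A minimal sketch follows; the bin count and function name are illustrative, and this is the common histogram-based estimator rather than the paper's exact implementation.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Histogram-based NMI = (H(A) + H(B)) / H(A, B).

    Higher values mean better intensity correspondence between two
    spatially aligned images (e.g., normalized PET vs. template)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    return (entropy(px) + entropy(py)) / entropy(p.ravel())

img = np.random.rand(64, 64)
nmi_self = normalized_mutual_information(img, img)              # maximal: 2.0
nmi_rand = normalized_mutual_information(img, np.random.rand(64, 64))
```

Under this definition a perfectly aligned identical pair scores 2.0, while unrelated images score close to 1, so larger values indicate better normalization.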


Subjects
Brain , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Positron-Emission Tomography/methods , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Male , Female , Deep Learning , Magnetic Resonance Imaging , Middle Aged , Aged
8.
Br J Radiol ; 97(1155): 632-639, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38265235

ABSTRACT

OBJECTIVES: To develop and validate a super-resolution (SR) algorithm that generates clinically feasible chest radiographs from 64-fold reduced data. METHODS: An SR convolutional neural network was trained to produce original-resolution images (output) from 64-fold reduced images (input) using 128 × 128 patches (n = 127,030). For validation, 112 radiographs, including those with pneumothorax (n = 17), nodules (n = 20), consolidations (n = 18), and ground-glass opacity (GGO; n = 16), were collected. Three image sets were prepared: the original images and those reconstructed from 64-fold reduced data using SR and conventional linear interpolation (LI). The mean-squared error (MSE) was calculated to measure the similarity between the reconstructed and original images, and image noise was quantified. Three thoracic radiologists evaluated the quality of each image and decided whether any abnormalities were present. RESULTS: The SR images were more similar to the original images than the LI-reconstructed images (MSE: 9269 ± 1015 vs. 9429 ± 1057; P = .02). The SR images showed lower measured noise and were scored as having a lower noise level by the three radiologists than both the original and LI-reconstructed images (Ps < .01). The radiologists' pooled sensitivity with the SR-reconstructed images was not significantly different from that with the original images for detecting pneumothorax (SR vs. original, 90.2% [46/51] vs. 96.1% [49/51]; P = .19), nodule (90.0% [54/60] vs. 85.0% [51/60]; P = .26), consolidation (100% [54/54] vs. 96.3% [52/54]; P = .50), and GGO (91.7% [44/48] vs. 95.8% [46/48]; P = .69). CONCLUSIONS: SR-reconstructed chest radiographs using 64-fold reduced data showed a lower noise level than the original images, with equivalent sensitivity for detecting major abnormalities. ADVANCES IN KNOWLEDGE: This is the first study to apply super-resolution to data reduction of chest radiographs.
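The 64-fold reduction and MSE comparison above can be sketched as follows. Nearest-neighbour upsampling stands in for the paper's linear interpolation baseline, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two images of equal shape."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def downsample_64x(img, factor=8):
    """64-fold data reduction by 8x8 block averaging."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample_nn(small, factor=8):
    """Nearest-neighbour reconstruction baseline (stand-in for linear
    interpolation; an SR network would replace this step)."""
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

img = np.random.rand(128, 128)               # toy radiograph
recon = upsample_nn(downsample_64x(img))     # reconstruction from 1/64 data
err = mse(img, recon)                        # nonzero: information was lost
```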


Subjects
Lung Diseases , Pneumothorax , Humans , Pneumothorax/diagnostic imaging , Neural Networks, Computer , Radiography , Algorithms
9.
Phys Med Biol ; 69(21)2024 Oct 21.
Article in English | MEDLINE | ID: mdl-39312947

ABSTRACT

Objective: Bone scans play an important role in skeletal lesion assessment, but gamma cameras exhibit challenges with low sensitivity and high noise levels. Deep learning (DL) has emerged as a promising solution to enhance image quality without increasing radiation exposure or scan time. However, existing self-supervised denoising methods, such as Noise2Noise (N2N), may introduce deviations from the clinical standard in bone scans. This study proposes an improved self-supervised denoising technique to minimize discrepancies between DL-denoised images and full scan images. Approach: A retrospective analysis of 351 whole-body bone scan datasets was conducted. We used the N2N and Noise2FullCount (N2F) denoising models, along with an interpolated version of N2N (iN2N). Denoising networks were trained separately for each reduced scan time from 5% to 50%, and also on mixed training datasets that include all shortened scans. We performed quantitative analysis and clinical evaluation by nuclear medicine experts. Main results: The denoising networks effectively generated images resembling full scans, with N2F revealing distinctive patterns for different scan times, N2N producing smooth textures with slight blurring, and iN2N closely mirroring full scan patterns. Quantitative analysis showed that denoising improved with longer input times and that mixed-count training outperformed fixed-count training. Traditional denoising methods lagged behind DL-based denoising, and N2N demonstrated limitations in long-scan images. Clinical evaluation favored N2N and iN2N in resolution, noise, blurriness, and findings, showcasing their potential for enhanced diagnostic performance in quarter-time scans. Significance: The improved self-supervised denoising technique presented in this study offers a viable solution to enhance bone scan image quality while minimizing deviations from clinical standards. Its effectiveness was demonstrated quantitatively and clinically, showing promise for quarter-time scans without compromising diagnostic performance. This approach holds potential for improving bone scan interpretation, aiding more accurate clinical diagnoses.
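The Noise2Noise principle referenced above rests on a simple fact: training against a second, independent noisy realization has the same expected-MSE minimizer as training against the clean target, provided the noise is zero-mean. A toy numpy illustration with a constant predictor follows; the data, noise model, and seed are illustrative, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full(10000, 5.0)                        # underlying clean signal
noisy_a = clean + rng.normal(0, 1, clean.shape)    # input realization
noisy_b = clean + rng.normal(0, 1, clean.shape)    # independent target realization

# MSE-optimal constant predictor fitted against the *noisy* targets
# (the Noise2Noise setting) is simply their mean:
theta_n2n = noisy_b.mean()
# The same predictor fitted against the clean targets (ordinary supervision):
theta_clean = clean.mean()

# Zero-mean noise means the two minimizers agree up to sampling error,
# which is the core Noise2Noise argument.
gap = abs(theta_n2n - theta_clean)
```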


Subjects
Bone and Bones , Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Humans , Bone and Bones/diagnostic imaging , Image Processing, Computer-Assisted/methods , Deep Learning , Retrospective Studies , Male
10.
Nucl Med Mol Imaging ; 58(4): 246-254, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38932756

ABSTRACT

Purpose: This study assesses the clinical performance of BTXBrain-Amyloid, an artificial intelligence-powered software for quantifying amyloid uptake in brain PET images. Methods: 150 amyloid brain PET images were visually assessed by experts and categorized as negative or positive. The standardized uptake value ratio (SUVR) was calculated with cerebellar grey matter as the reference region, and receiver operating characteristic (ROC) and precision-recall (PR) analyses for BTXBrain-Amyloid were conducted. For comparison, the same image processing and analysis were performed using the Statistical Parametric Mapping (SPM) program. In addition, to evaluate spatial normalization (SN) performance, the mutual information (MI) between the MRI template and the spatially normalized PET images was calculated, and SPM group analysis was conducted. Results: Both the BTXBrain and SPM methods discriminated between the negative and positive groups. However, BTXBrain exhibited a lower SUVR standard deviation (0.06 and 0.21 for negative and positive, respectively) than the SPM method (0.11 and 0.25). In ROC analysis, BTXBrain had an AUC of 0.979, compared with 0.959 for SPM, while PR curves showed an AUC of 0.983 for BTXBrain and 0.949 for SPM. At the optimal cut-off, the sensitivity and specificity were 0.983 and 0.921 for BTXBrain and 0.917 and 0.921 for SPM12, respectively. MI evaluation also favored BTXBrain (0.848 vs. 0.823), indicating improved SN. In SPM group analysis, BTXBrain exhibited higher sensitivity in detecting basal ganglia differences between the negative and positive groups. Conclusion: BTXBrain-Amyloid outperformed SPM in clinical performance evaluation, also demonstrating superior SN and improved detection of deep brain differences. These results suggest the potential of BTXBrain-Amyloid as a valuable tool for clinical amyloid PET image evaluation.
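ROC AUC figures like those above can be reproduced from per-image scores and labels via the rank-sum (Mann-Whitney) identity: the AUC is the probability that a random positive is scored above a random negative. A minimal sketch with toy data (the function name and values are illustrative):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum identity; ties count one half."""
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy visual reads (1 = amyloid-positive) and global SUVR scores
y = np.array([0, 0, 1, 1, 0, 1])
suvr = np.array([1.0, 1.1, 1.6, 1.8, 1.2, 1.1])
auc = roc_auc(y, suvr)   # 7 wins + 1 tie over 9 pairs -> 7.5/9
```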

11.
J Nucl Med ; 64(4): 659-666, 2023 04.
Article in English | MEDLINE | ID: mdl-36328490

ABSTRACT

This paper proposes a novel method for automatic quantification of amyloid PET using deep learning-based spatial normalization (SN) of PET images, which does not require MRI or CT images of the same patient. The accuracy of the method was evaluated for 3 different amyloid PET radiotracers in comparison with MRI-parcellation-based PET quantification using FreeSurfer. Methods: A deep neural network model for the SN of amyloid PET images was trained using 994 multicenter amyloid PET images (367 18F-flutemetamol and 627 18F-florbetaben) and the corresponding 3-dimensional MR images of subjects who had Alzheimer disease or mild cognitive impairment or were cognitively normal. For comparison, PET SN was also conducted using version 12 of the Statistical Parametric Mapping program (SPM-based SN). The accuracy of deep learning-based and SPM-based SN and SUV ratio quantification relative to the FreeSurfer-based estimation in individual brain spaces was evaluated using 148 other amyloid PET images (64 18F-flutemetamol and 84 18F-florbetaben). Additional external validation was performed using an unseen independent external dataset (30 18F-flutemetamol, 67 18F-florbetaben, and 39 18F-florbetapir). Results: Quantification results using the proposed deep learning-based method showed stronger correlations with the FreeSurfer estimates than SPM-based SN using MRI did. For example, the slope, y-intercept, and R2 values between SPM and FreeSurfer for the global cortex were 0.869, 0.113, and 0.946, respectively. In contrast, the slope, y-intercept, and R2 values between the proposed deep learning-based method and FreeSurfer were 1.019, -0.016, and 0.986, respectively. The external validation study also demonstrated better performance for the proposed method without MR images than for SPM with MRI. In most brain regions, the proposed method outperformed SPM SN in terms of linear regression parameters and intraclass correlation coefficients.
Conclusion: We evaluated a novel deep learning-based SN method that allows quantitative analysis of amyloid brain PET images without structural MRI. The quantification results using the proposed method showed a strong correlation with MRI-parcellation-based quantification using FreeSurfer for all clinical amyloid radiotracers. Therefore, the proposed method will be useful for investigating Alzheimer disease and related brain disorders using amyloid PET scans.
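The SUV ratio quantification that all of these methods target reduces, once images are normalized and parcellated, to a ratio of regional means over the reference region. A minimal sketch with a toy volume; the masks, values, and function name are illustrative.

```python
import numpy as np

def regional_suvr(pet, region_mask, reference_mask):
    """SUVR = mean uptake in a target region divided by mean uptake in
    the reference region (e.g., cerebellar grey matter)."""
    return pet[region_mask].mean() / pet[reference_mask].mean()

# Toy volume: top half plays the cortical target, bottom half the reference
pet = np.zeros((4, 4, 4))
cortex = np.zeros_like(pet, dtype=bool); cortex[:2] = True
cerebellum = np.zeros_like(pet, dtype=bool); cerebellum[2:] = True
pet[cortex] = 1.4        # amyloid-positive-like cortical uptake
pet[cerebellum] = 1.0    # reference uptake
suvr = regional_suvr(pet, cortex, cerebellum)   # -> 1.4
```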


Subjects
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Aniline Compounds , Brain/diagnostic imaging , Amyloid , Amyloidogenic Proteins , Positron-Emission Tomography/methods , Neural Networks, Computer , Magnetic Resonance Imaging/methods
12.
Nucl Med Mol Imaging ; 57(2): 86-93, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36998591

ABSTRACT

Purpose: Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods: The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 were used as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts a preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results: The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margins of the tumors, which had been determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion: The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
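The Dice similarity coefficient used for the quantitative analysis above can be sketched as follows; the masks are toy data and the function name is illustrative.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two partially overlapping 4x4 squares in an 8x8 grid
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 16 voxels
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True     # 16 voxels
d = dice(pred, truth)   # overlap is 3x3 = 9, so 2*9/32 = 0.5625
```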

13.
Phys Med Biol ; 66(9)2021 04 27.
Article in English | MEDLINE | ID: mdl-33780912

ABSTRACT

Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iterative reweighting scheme to overcome the limitations of the original Bowsher method. In addition, we have derived a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In both the simulation and the real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. In addition, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving PET image quality based on anatomical side information.
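The priors contrasted above can be written schematically as follows. The notation is illustrative rather than the paper's exact formulation: B_j denotes the set of anatomically most similar neighbours of voxel j selected by the Bowsher criterion, λ the activity image, and β the regularization weight.

```latex
% Original quadratic (l2) Bowsher prior over anatomically selected neighbours:
R_{\ell_2}(\lambda) = \beta \sum_j \sum_{k \in B_j} (\lambda_j - \lambda_k)^2

% Proposed l1 counterpart, which penalizes differences linearly and thus
% preserves edges:
R_{\ell_1}(\lambda) = \beta \sum_j \sum_{k \in B_j} \lvert \lambda_j - \lambda_k \rvert

% Iterative reweighting approximates the l1 penalty by a weighted l2 one,
% with weights refreshed from the current estimate \lambda^{(n)}:
w_{jk}^{(n)} = \frac{1}{\lvert \lambda_j^{(n)} - \lambda_k^{(n)} \rvert + \epsilon},
\qquad
R^{(n)}(\lambda) \approx \beta \sum_j \sum_{k \in B_j} w_{jk}^{(n)} (\lambda_j - \lambda_k)^2
```

Large differences receive small weights, so edges are penalized less than in the quadratic prior, which matches the edge-preserving behaviour reported above.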


Subjects
Positron-Emission Tomography , Algorithms , Computer Simulation , Humans , Phantoms, Imaging
14.
Phys Med Biol ; 66(11)2021 05 20.
Article in English | MEDLINE | ID: mdl-33910170

ABSTRACT

We propose a deep learning-based, data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not need a gated CT. The proposed method is a multi-step process that consists of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed for phase-matched AC of the gated-PET images. We conducted a non-rigid registration of the gated-PET images to generate motion-free PET images. We trained the CNN by conducting 3D patch-based learning with 80 oncologic whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT scan datasets and applied it to seven regional PET/CT scans that cover the lower lung and upper liver. We investigated the impact of the proposed respiratory phase-matched AC of PET without utilizing CT on tumor size and standardized uptake value (SUV) assessment, and on PET image quality (%STD). The attenuation-corrected gated and motion-free PET images generated using the proposed method yielded sharper organ boundaries and better noise characteristics than conventional gated and ungated PET images. A banana artifact observed with phase-mismatched CT-based AC was not observed with the proposed approach. By employing the proposed method, the size of tumors with movements larger than 5 mm was reduced by 12.3% and their SUV90% was increased by 13.3%. The %STD of liver uptake was reduced by 11.1%. The deep learning-based, data-driven respiratory phase-matched AC method improved the PET image quality and reduced the motion artifacts.


Subjects
Artifacts , Positron Emission Tomography Computed Tomography , Algorithms , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted , Motion , Positron-Emission Tomography
15.
Biomed Eng Lett ; 11(3): 263-271, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34350052

ABSTRACT

Although MR-guided radiotherapy (MRgRT) is advancing rapidly, generating an accurate synthetic CT (sCT) from MRI remains challenging. Previous approaches using deep neural networks require large datasets of precisely co-registered CT and MRI pairs, which are difficult to obtain because of respiration and peristalsis. Here, we propose a method to generate sCT by training a deep network on weakly paired CT and MR images acquired from an MRgRT system, using a cycle-consistent GAN (CycleGAN) framework that allows unpaired image-to-image translation in the abdomen and thorax. Data from 90 cancer patients who underwent MRgRT were retrospectively used. CT images of the patients were aligned to the corresponding MR images using deformable registration, and the deformed CT (dCT) and MRI pairs were used for network training and testing. A 2.5D CycleGAN was constructed to generate sCT from the MRI input. To improve sCT generation, a perceptual loss, which measures the discrepancy between high-dimensional representations of images extracted from a well-trained classifier, was incorporated into the CycleGAN. The CycleGAN with perceptual loss outperformed the U-net in terms of errors and similarities between sCT and dCT and in dose estimation for treatment planning of the thorax and abdomen. The sCT generated using the CycleGAN produced virtually identical dose distribution maps and dose-volume histograms compared with dCT. The CycleGAN with perceptual loss thus outperformed the U-net in sCT generation when trained with weakly paired dCT-MRI for MRgRT. The proposed method will be useful for increasing the treatment accuracy of MR-only or MR-guided adaptive radiotherapy. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13534-021-00195-8.
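The perceptual loss described above compares images in a feature space rather than pixel by pixel. A minimal sketch follows, with the VGG-style feature extraction omitted (the features are assumed to come from a fixed, pre-trained classifier) and the loss weights chosen arbitrarily, not taken from the paper:

```python
import numpy as np

def perceptual_loss(feat_real, feat_fake):
    # Mean squared distance between high-dimensional feature maps of the
    # real and synthesized images, extracted by a fixed classifier.
    return float(np.mean((feat_real - feat_fake) ** 2))

def cyclegan_generator_loss(adv, cycle, perc, lam_cyc=10.0, lam_perc=1.0):
    # Total generator objective: adversarial + cycle-consistency +
    # perceptual terms. The weights lam_* are illustrative assumptions.
    return adv + lam_cyc * cycle + lam_perc * perc

loss = cyclegan_generator_loss(adv=1.0,
                               cycle=0.5,
                               perc=perceptual_loss(np.ones((8, 8)),
                                                    np.ones((8, 8))))
```

Because the perceptual term penalizes structural rather than pixel-wise discrepancies, it tolerates the residual misalignment left by deformable registration of weakly paired dCT-MRI data.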

16.
Sci Rep ; 11(1): 1673, 2021 01 18.
Article in English | MEDLINE | ID: mdl-33462321

ABSTRACT

The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, because of the long scan time of 3D MR images, mainly 2D images are acquired in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-net-like structure and DenseNet sub-blocks. To train the network, both a fidelity loss and a perceptual loss based on the VGG network were considered. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features of the original 3D data. Diagnostic ability using the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
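The sparse sampling that the network inpaints can be emulated by zeroing all but every k-th axial slice of a 3D volume, mimicking a stack of clinical 2D acquisitions. The slice spacing below is an illustrative assumption, not the paper's protocol.

```python
import numpy as np

def sparse_slice_mask(shape, step):
    # Binary mask that keeps every `step`-th axial slice of a 3D volume;
    # the inpainting network must fill in the zeroed slices between them.
    mask = np.zeros(shape, dtype=bool)
    mask[::step] = True
    return mask

rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64))          # stands in for a full 3D MR volume
mask = sparse_slice_mask(vol.shape, step=4)
sparse = vol * mask                     # network input: every 4th slice kept
```

The training pair is then (sparse, vol): the fidelity loss compares voxels directly, while the perceptual loss compares VGG features of the inpainted and original slices.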


Subjects
Brain/anatomy & histology, Deep Learning, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Aged, Brain/diagnostic imaging, Female, Humans, Male, Neural Networks, Computer
17.
Nucl Med Mol Imaging ; 54(6): 299-304, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33282001

ABSTRACT

PURPOSE: Early deep learning-based image denoising techniques mainly used a fully supervised model that learns to generate a clean image from a noisy input (noise2clean: N2C). The aim of this study is to explore the feasibility of self-supervised methods (noise2noise: N2N and noiser2noise: Nr2N) for PET image denoising on measured PET data sets by comparing their performance with the conventional N2C model. METHODS: For training and evaluating the networks, 18F-FDG brain PET/CT scan data of 14 patients were retrospectively used (10 for training and 4 for testing). From the 60-min list-mode data, we generated a total of 100 data bins of 10-s duration. We also generated 40-s data by adding four non-overlapping 10-s bins and 300-s reference data by adding all list-mode data. We employed the U-Net, which is widely used for various tasks in biomedical imaging, to train and test the proposed denoising models. RESULTS: N2C, N2N, and Nr2N were all effective in improving the noisy inputs. While N2N showed PSNR equivalent to N2C at all noise levels, Nr2N yielded higher SSIM than N2N. N2N yielded denoised images similar to the Gaussian-filtered reference image regardless of the input noise level, and image contrast was better in the N2N results. CONCLUSION: The self-supervised denoising methods will be useful for reducing PET scan time or radiation dose.
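The idea behind N2N is that pairing one noisy realization with another, independently noisy realization of the same image is a valid training target: under a squared-error loss, the minimizer is the expectation of the target, which is the clean image. A minimal numpy sketch, with Gaussian noise standing in for PET count noise (which is Poisson-like) and the 300-s sum standing in for the clean reference:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((16, 16))            # stands in for the 300-s reference

def noisy(img, sigma=0.1):
    # One independent noise realization, a proxy for a short 10-s count bin.
    return img + rng.normal(0.0, sigma, img.shape)

# noise2clean (N2C): the supervised pair needs the clean reference
x_n2c, y_n2c = noisy(clean), clean
# noise2noise (N2N): the target is just another noisy realization --
# its expectation is the clean image, so no clean data is needed
x_n2n, y_n2n = noisy(clean), noisy(clean)

# empirically, averaging many noisy targets recovers the clean image
avg_target = np.mean([noisy(clean) for _ in range(500)], axis=0)
```

In the study's setting, the independent realizations come for free from non-overlapping list-mode bins of the same acquisition.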

18.
Phys Med ; 72: 60-72, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32200299

ABSTRACT

In positron emission tomography (PET) studies, voxel-wise calculation of the individual rate constants describing tracer kinetics is challenging because of the nonlinear relationship between the rate constants and the PET data and the high noise level in voxel data. Based on preliminary simulations using a standard two-tissue compartment model, we hypothesized that errors in the rate constant estimates can be reduced by constraining the overestimation of the larger of the two exponents in the model equation. We thus propose a novel approach based on infinity-norm regularization to limit this exponent. Because the non-smooth cost function of this regularization scheme prevents the use of conventional Jacobian-based optimization methods, we examined a proximal gradient algorithm and particle swarm optimization (PSO) in a simulation study. Because it exploits multiple initial values, PSO showed much better convergence than the proximal gradient algorithm, which is sensitive to the initial values. In the PSO implementation, using a Gamma distribution to govern the random movements improved the convergence rate and stability compared with a uniform distribution. Consequently, Gamma-based PSO with regularization outperformed all other methods tested, including the conventional basis function method and the Levenberg-Marquardt algorithm, in terms of statistical properties.
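A minimal sketch of a Gamma-governed PSO: the random move magnitudes are drawn from a Gamma distribution with mean 0.5 (matching the mean of the usual Uniform(0,1) draws). The shape/scale values and PSO coefficients here are illustrative assumptions, and a toy sphere function stands in for the regularized compartment-model cost with its infinity-norm penalty on the larger exponent.

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=30, n_iters=100, seed=0):
    # Minimal particle swarm optimizer; multiple random initial positions
    # make it robust to the initialization sensitivity that hampers
    # gradient-based methods on non-smooth costs.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(n_iters):
        # Gamma(2, 0.25) draws: mean 0.5, heavier tail than Uniform(0,1)
        r1 = rng.gamma(2.0, 0.25, x.shape)
        r2 = rng.gamma(2.0, 0.25, x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pcost)].copy()
    return g, float(pcost.min())

# toy stand-in for the regularized compartment-model cost
sphere = lambda p: float(np.sum(p ** 2))
best, best_cost = pso_minimize(sphere, np.full(2, -5.0), np.full(2, 5.0))
```

In the actual application, `cost` would be the weighted residual of the two-tissue-compartment model fit plus the infinity-norm penalty on the larger exponent.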


Subjects
Image Processing, Computer-Assisted/methods, Nonlinear Dynamics, Positron Emission Tomography Computed Tomography, Animals, Fluorodeoxyglucose F18, Kinetics, Male, Mice, Mice, Inbred C57BL
19.
J Nucl Med ; 60(8): 1183-1189, 2019 08.
Article in English | MEDLINE | ID: mdl-30683763

ABSTRACT

We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than the Dixon-based 4-segment method. We use the activity and attenuation maps estimated by the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) that learns a CT-derived attenuation map. Methods: Whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-net was trained to predict the CT-derived µ-map (µ-CT) from the MLAA-generated activity distribution (λ-MLAA) and µ-map (µ-MLAA). We used 1.3 million patches derived from 60 patients' data to train the CNN, data of 20 other patients were used as a validation set to prevent overfitting, and data of the remaining 20 patients were used as a test set for the performance analysis. The attenuation maps generated using the proposed method (µ-CNN), µ-MLAA, and the 4-segment method (µ-segment) were compared with µ-CT, the ground truth. We also compared the voxelwise correlation between the activity images reconstructed using ordered-subset expectation maximization with each µ-map, and the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generated less noisy attenuation maps and achieved better bone identification than MLAA. The average Dice similarity coefficient for bone regions between µ-CNN and µ-CT was 0.77, significantly higher than that between µ-MLAA and µ-CT (0.36). The CNN result also showed the best pixel-by-pixel correlation with the CT-based results and markedly reduced differences in the activity maps relative to CT-based attenuation correction. Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.
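The network input described above pairs the MLAA activity and attenuation estimates. A sketch of the two-channel input preparation follows; the normalization constants are illustrative assumptions, not the paper's preprocessing.

```python
import numpy as np

def make_cnn_input(lam_mlaa, mu_mlaa, mu_scale=0.15):
    # Stack the MLAA activity (λ-MLAA) and attenuation (µ-MLAA) volumes
    # into a two-channel input for the U-net that predicts µ-CT.
    lam = lam_mlaa / (lam_mlaa.max() + 1e-8)   # activity scaled to [0, 1]
    mu = mu_mlaa / mu_scale                    # illustrative µ scale (cm⁻¹)
    return np.stack([lam, mu], axis=0)

rng = np.random.default_rng(1)
lam = rng.random((8, 32, 32))
mu = 0.096 * np.ones((8, 32, 32))   # ≈ µ of water at 511 keV, in cm⁻¹
inp = make_cnn_input(lam, mu)
```

Feeding both maps lets the network use the activity distribution (which carries anatomical contrast) to correct the noisy, crosstalk-prone µ-MLAA toward a CT-like µ-map.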


Subjects
Brain Mapping, Fluorodeoxyglucose F18/pharmacology, Magnetic Resonance Imaging, Neoplasms/diagnostic imaging, Positron-Emission Tomography, Adult, Aged, Algorithms, Female, Humans, Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Male, Metals, Middle Aged, Multimodal Imaging, Neural Networks, Computer, Reproducibility of Results, Retrospective Studies, Whole Body Imaging