Results 1 - 20 of 77
1.
Eur J Nucl Med Mol Imaging ; 50(7): 1881-1896, 2023 06.
Article in English | MEDLINE | ID: mdl-36808000

ABSTRACT

PURPOSE: Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS: Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The Iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS: The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29, +0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26, +0.24 SUV, mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively.
The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION: An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. Our model eliminates the need for accurate registration, segmentation, or PET scanner system response characterization. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
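The RMSE and PSNR figures quoted above follow standard image-comparison definitions; a minimal sketch of those two metrics (hypothetical helper names, not the authors' code):

```python
import numpy as np

def rmse(reference, predicted):
    """Root mean squared error between two same-shape images."""
    return float(np.sqrt(np.mean((reference - predicted) ** 2)))

def psnr(reference, predicted, data_range=None):
    """Peak signal-to-noise ratio in dB; the peak defaults to the
    dynamic range of the reference image."""
    if data_range is None:
        data_range = float(reference.max() - reference.min())
    return float(20.0 * np.log10(data_range / rmse(reference, predicted)))

ref = np.array([[0.0, 1.0], [2.0, 3.0]])
pred = np.array([[0.0, 1.0], [2.0, 2.0]])
print(rmse(ref, pred))  # 0.5
```

SSIM, in contrast, involves local means, variances, and covariances, so in practice it is usually taken from an image-processing library rather than re-implemented.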


Subjects
Aniline Compounds , Fluorodeoxyglucose F18 , Humans , Positron-Emission Tomography/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
2.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 03.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset consisted of 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing a residual U-block in a U-shape architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled to one server, as well as center-based (CB) models, wherein the model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center).
RESULTS: In terms of percent SUV absolute relative error (ARE%), both FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison, with respect to reference CT-ASC, exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods compared to reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized model. Our work provided strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
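The parallel FL strategy described above can be illustrated with sample-size-weighted server-side averaging in the style of FedAvg; this is a hypothetical sketch of the aggregation step only, not the paper's exact FL-PL rule:

```python
import numpy as np

def federated_average(center_weights, center_sizes):
    """Server-side aggregation: each center trains locally, then the
    server averages the model parameters layer by layer, weighting
    each center by its number of training samples (FedAvg-style)."""
    total = float(sum(center_sizes))
    n_layers = len(center_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(center_weights[0][layer], dtype=float)
        for weights, n in zip(center_weights, center_sizes):
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged
```

In a full FL round, the averaged parameters would then be broadcast back to every center for the next local training epoch.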


Subjects
Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods , Magnetic Resonance Imaging/methods
3.
Eur Radiol ; 33(5): 3243-3252, 2023 May.
Article in English | MEDLINE | ID: mdl-36703015

ABSTRACT

OBJECTIVES: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS: We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline, estimated by the deep learning model and ground truth (BCAP), was compared with patient mis-centering during manual positioning (BCMP). We evaluated the performance of our model in terms of distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS: The error in terms of BCAP was - 0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which achieved an error of 9.35 ± 14.94 and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 and -0.27 ± 16.29 mm for C1 and C2, respectively. The error in terms of BCAP and LCAP was higher for larger patients (p value < 0.01). CONCLUSION: The accuracy of the proposed method was comparable to available alternative methods, carrying the advantage of being free from errors related to objects blocking the camera visibility. KEY POINTS: • Patient mis-centering in the anterior-posterior direction (AP) is a common problem in clinical practice which can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT image localizer, achieving a performance comparable to alternative techniques, such as the external 3D visual camera. 
• The advantage of the proposed method is that it is free from errors related to objects blocking the camera visibility and that it could be implemented on imaging consoles as a patient positioning support tool.
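The body-centerline offsets reported above (BCAP, BCMP) reduce to a simple geometric computation once a body mask is available from the predicted 3D image; a minimal sketch, where the mask-extraction step and all names are hypothetical:

```python
import numpy as np

def centering_offset_mm(body_mask, pixel_spacing_mm, table_center_px):
    """Offset (mm) between the body's geometric centerline, taken as
    the center of its bounding box along the AP axis (rows), and the
    table isocenter. `body_mask` is a 2D boolean array."""
    rows_with_body = np.where(np.any(body_mask, axis=1))[0]
    center_px = (rows_with_body[0] + rows_with_body[-1]) / 2.0
    return (center_px - table_center_px) * pixel_spacing_mm
```

A negative value would indicate the patient lies anterior to the isocenter, a positive value posterior (under this sign convention).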


Subjects
Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Imaging, Three-Dimensional , Patient Positioning/methods , Image Processing, Computer-Assisted/methods
4.
J Digit Imaging ; 36(4): 1588-1596, 2023 08.
Article in English | MEDLINE | ID: mdl-36988836

ABSTRACT

The existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as input to the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For the 6% L-PET imaging, two models were developed: the first was trained using a single input of 6% L-PET, and the second using three inputs of 6%, 4%, and 2% L-PET to predict S-PET images. Similarly, for 4% L-PET imaging, a model was trained using a single input of 4% low-dose data, and a three-channel model was developed taking 4%, 3%, and 2% L-PET images as input. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and their SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
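The lesion SUVmean/SUVmax bias metrics used above can be computed directly from a lesion mask over SUV-scaled images; a minimal sketch (hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def lesion_suv_bias(reference, predicted, lesion_mask):
    """Relative bias (%) of lesion SUVmean and SUVmax in a predicted
    image versus the reference standard-dose image. Both images are
    assumed to already be in SUV units; `lesion_mask` is boolean."""
    ref_vals = reference[lesion_mask]
    pred_vals = predicted[lesion_mask]
    bias_mean = 100.0 * (pred_vals.mean() - ref_vals.mean()) / ref_vals.mean()
    bias_max = 100.0 * (pred_vals.max() - ref_vals.max()) / ref_vals.max()
    return bias_mean, bias_max
```

Reporting both statistics matters clinically because SUVmax drives many response criteria while SUVmean is more robust to residual noise.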


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Positron-Emission Tomography/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods
5.
J Digit Imaging ; 36(2): 574-587, 2023 04.
Article in English | MEDLINE | ID: mdl-36417026

ABSTRACT

In this study, an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC), which considers anatomical flexibility, rigidity, and motion within an image deformation, was proposed. Data included 57 CT scans (7202 2D slices) of patients with LACC randomly divided into training (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented, and the couches were removed. A correlated stochastic field was simulated with the same size as the target image (used for deformation) to produce a general random deformation. The deformation field was optimized to have a maximum amplitude in the rectum region, a moderate amplitude in the bladder region, and as small an amplitude as possible within bony structures. DIRNet is a convolutional neural network that consists of convolutional regressors, spatial transformation, and resampling blocks; it was implemented with different parameter settings. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is, therefore, proposed for high-dose-rate brachytherapy in inter-fraction cervical cancer since it outperformed conventional algorithms.
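A correlated stochastic field like the one driving the simulated deformations can be produced by smoothing white noise; a minimal 2D sketch assuming Gaussian-kernel smoothing, with the correlation length (`sigma`) and seed as free parameters:

```python
import numpy as np

def correlated_field(shape, sigma, seed=0):
    """Spatially correlated random field: white Gaussian noise smoothed
    with a separable Gaussian kernel, giving a correlation length on
    the order of `sigma` pixels."""
    rng = np.random.default_rng(seed)
    field = rng.standard_normal(shape)
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    for axis in range(field.ndim):
        field = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, field)
    return field
```

Per-organ amplitude weighting (large in the rectum, small in bone), as described above, would then be applied by multiplying this field with region-dependent scale maps before using it as a deformation field.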


Subjects
Brachytherapy , Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Brachytherapy/methods , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Rectum , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
6.
Hum Brain Mapp ; 43(16): 5032-5043, 2022 11.
Article in English | MEDLINE | ID: mdl-36087092

ABSTRACT

We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed and the performance of both models (IS and SS) compared with reference TOF and non-TOF images. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. The Bland-Altman analysis revealed that the lowest tracer uptake value bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.


Subjects
Deep Learning , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography , Positron Emission Tomography Computed Tomography , Brain/diagnostic imaging
7.
Magn Reson Med ; 87(2): 686-701, 2022 02.
Article in English | MEDLINE | ID: mdl-34480771

ABSTRACT

PURPOSE: We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. METHODS: Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR images were enrolled. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. RESULTS: The deep learning approach outperformed both atlas- and segmentation-based methods, resulting in less than 4% SUV bias across 25 patients, compared to the segmentation-based method with up to 20% SUV bias in bony structures and the atlas-based method with 9% bias in the lung. However, in cases of severe truncation and metal artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. CONCLUSION: The deep learning-based method provides promising outcomes for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
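The three-tissue-class segmentation approach assigns a fixed linear attenuation coefficient per class; a minimal sketch, where the intensity thresholds are hypothetical and the mu values are representative 511-keV coefficients (cm^-1), not the paper's exact settings:

```python
import numpy as np

def three_class_mu_map(dixon_in_phase, air_thr, lung_thr,
                       mu_lung=0.018, mu_soft=0.096):
    """Three-tissue-class attenuation map (background air, lung,
    soft tissue) from simple intensity thresholds on an in-phase
    Dixon MR image. Background air keeps mu = 0."""
    mu = np.zeros_like(dixon_in_phase, dtype=float)
    mu[(dixon_in_phase > air_thr) & (dixon_in_phase <= lung_thr)] = mu_lung
    mu[dixon_in_phase > lung_thr] = mu_soft
    return mu
```

The abstract's reported bony-structure bias follows directly from this scheme: bone, invisible on conventional Dixon MRI, is absorbed into the soft-tissue class and its higher attenuation is lost.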


Subjects
Deep Learning , Positron Emission Tomography Computed Tomography , Algorithms , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Positron-Emission Tomography , Torso
8.
Eur J Nucl Med Mol Imaging ; 49(12): 4048-4063, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35716176

ABSTRACT

PURPOSE: This study proposed and investigated the feasibility of estimating the Patlak-derived influx rate constant (Ki) from standardized uptake value (SUV) and/or dynamic PET image series. METHODS: Whole-body 18F-FDG dynamic PET images of 19 subjects consisting of 13 frames or passes were employed for training a residual deep learning model with SUV and/or dynamic series as input and Ki-Patlak (slope) images as output. The training and evaluation were performed using a nine-fold cross-validation scheme. Owing to the availability of SUV images acquired 60 min post-injection (20 min total acquisition time), the datasets used for the training of the models were split into two groups: "With SUV" and "Without SUV." For the "With SUV" group, the model was first trained using only SUV images, and then the passes (starting from pass 13, the last pass, down to pass 9) were added to the training of the model (one pass at a time). For this group, 6 models were developed with input data consisting of SUV, SUV plus pass 13, SUV plus passes 13 and 12, SUV plus passes 13 to 11, SUV plus passes 13 to 10, and SUV plus passes 13 to 9. For the "Without SUV" group, the same trend was followed, but without using the SUV images (5 models were developed with input data of passes 13 to 9). For model performance evaluation, the mean absolute error (MAE), mean error (ME), mean relative absolute error (MRAE%), relative error (RE%), mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the Ki-Patlak images predicted by the two groups and the reference Ki-Patlak images generated through Patlak analysis using the whole acquired datasets. For specific evaluation of the method, regions of interest (ROIs) were drawn on representative organs, including the lung, liver, brain, and heart, and around the identified malignant lesions.
RESULTS: The MRAE%, RE%, PSNR, and SSIM indices across all patients were estimated as 7.45 ± 0.94%, 4.54 ± 2.93%, 46.89 ± 2.93, and 1.00 ± 6.7 × 10⁻⁷, respectively, for models predicted using SUV plus passes 13 to 9 as input. The parameters predicted using passes 13 to 11 as input exhibited almost similar results compared to the models predicted using SUV plus passes 13 to 9 as input. Yet, the bias was continuously reduced by adding passes until pass 11, after which the magnitude of error reduction was negligible. Hence, the predicted model with SUV plus passes 13 to 9 had the lowest quantification bias. Lesions invisible in one or both of the SUV and Ki-Patlak images appeared similarly, through visual inspection, in the predicted images with tolerable bias. CONCLUSION: This study demonstrated the feasibility of a direct deep learning-based approach to estimate Ki-Patlak parametric maps without requiring the input function and with fewer passes. This would lead to shorter acquisition times for whole-body dynamic imaging with acceptable bias and comparable lesion detectability performance.
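For reference, conventional Patlak graphical analysis, which the network above is trained to bypass, estimates Ki as the slope of a linearized plot; a minimal per-voxel sketch assuming a known plasma input function Cp(t) and an irreversible-uptake regime after time t*:

```python
import numpy as np

def patlak_ki(cp, ct, t, t_star_index):
    """Patlak analysis: after t*, Ct(t)/Cp(t) is linear in
    (integral of Cp)/Cp(t); the slope is the influx rate Ki and the
    intercept approximates the initial distribution volume V0."""
    # cumulative trapezoidal integral of the input function
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = integral / cp
    y = ct / cp
    ki, v0 = np.polyfit(x[t_star_index:], y[t_star_index:], 1)
    return ki, v0
```

Running this regression for every voxel over the full dynamic series is exactly the costly step whose acquisition burden the deep learning shortcut reduces.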


Subjects
Fluorodeoxyglucose F18 , Positron Emission Tomography Computed Tomography , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Positron Emission Tomography Computed Tomography/methods , Positron-Emission Tomography/methods , Whole Body Imaging/methods
9.
Eur J Nucl Med Mol Imaging ; 49(5): 1508-1522, 2022 04.
Article in English | MEDLINE | ID: mdl-34778929

ABSTRACT

PURPOSE: This work set out to investigate the feasibility of dose reduction in SPECT myocardial perfusion imaging (MPI) without sacrificing diagnostic accuracy. A deep learning approach was proposed to synthesize full-dose images from the corresponding low-dose images at different dose reduction levels in the projection space. METHODS: Clinical SPECT-MPI images of 345 patients acquired on a dedicated cardiac SPECT camera in list-mode format were retrospectively employed to predict standard-dose from low-dose images at half-, quarter-, and one-eighth-dose levels. To simulate realistic low-dose projections, 50%, 25%, and 12.5% of the events were randomly selected from the list-mode data by applying binomial subsampling. A generative adversarial network was implemented to predict non-gated standard-dose SPECT images in the projection space at the different dose reduction levels. Well-established metrics, including the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity index metric (SSIM), in addition to Pearson correlation coefficient analysis and clinical parameters derived from Cedars-Sinai software, were used to quantitatively assess the predicted standard-dose images. For clinical evaluation, the quality of the predicted standard-dose images was evaluated by a nuclear medicine specialist using a seven-point (-3 to +3) grading scheme. RESULTS: The highest PSNR (42.49 ± 2.37) and SSIM (0.99 ± 0.01) and the lowest RMSE (1.99 ± 0.63) were achieved at the half-dose level. Pearson correlation coefficients were 0.997 ± 0.001, 0.994 ± 0.003, and 0.987 ± 0.004 for the predicted standard-dose images at half-, quarter-, and one-eighth-dose levels, respectively. Using the standard-dose images as reference, the Bland-Altman plots generated for the Cedars-Sinai selected parameters exhibited remarkably less bias and variance in the predicted standard-dose images compared with the low-dose images at all reduced dose levels.
Overall, considering the clinical assessment performed by a nuclear medicine specialist, 100%, 80%, and 11% of the predicted standard-dose images were clinically acceptable at half-, quarter-, and one-eighth-dose levels, respectively. CONCLUSION: The noise was effectively suppressed by the proposed network, and the predicted standard-dose images were comparable to reference standard-dose images at half- and quarter-dose levels. However, recovery of the underlying signals/information in low-dose images beyond a quarter of the standard dose would not be feasible (due to the very poor signal-to-noise ratio), which would adversely affect the clinical interpretation of the resulting images.
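The low-dose simulation step described above (random selection of list-mode events) is equivalent to binomial thinning, which can be sketched at the level of binned projection counts, assuming the counts are available as an integer array (the paper thins individual list-mode events, but the counting statistics are the same):

```python
import numpy as np

def binomial_subsample(counts, fraction, seed=0):
    """Simulate a reduced-dose acquisition: each recorded event is kept
    independently with probability `fraction`, which preserves the
    Poisson character of the projection data."""
    rng = np.random.default_rng(seed)
    return rng.binomial(counts, fraction)
```

Because thinning a Poisson process yields another Poisson process, the subsampled data behave statistically like a genuine acquisition at the reduced dose, which is what makes this a realistic low-dose surrogate.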


Subjects
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Perfusion , Retrospective Studies , Signal-To-Noise Ratio , Tomography, Emission-Computed, Single-Photon
10.
Ultrason Imaging ; 44(1): 25-38, 2022 01.
Article in English | MEDLINE | ID: mdl-34986724

ABSTRACT

U-Net-based algorithms are limited by their computational complexity when used in clinical devices. In this paper, we addressed this problem through a novel U-Net-based architecture, called fast and accurate U-Net, for the medical image segmentation task. The proposed fast and accurate U-Net model contains four tuned 2D-convolutional, 2D-transposed convolutional, and batch normalization layers as its main layers. There are four blocks in the encoder-decoder path. The results of our proposed architecture were evaluated using a prepared dataset for head circumference and abdominal circumference segmentation tasks, and a public dataset (HC18-Grand challenge dataset) for fetal head circumference measurement. The proposed fast network significantly improved the processing time in comparison with U-Net, dilated U-Net, R2U-Net, attention U-Net, and MFP U-Net, taking 0.47 seconds to segment a fetal abdominal image. In addition, over the prepared dataset using the proposed accurate model, Dice and Jaccard coefficients were 97.62% and 95.43% for fetal head segmentation, and 95.07% and 91.99% for fetal abdominal segmentation. Moreover, we obtained Dice and Jaccard coefficients of 97.45% and 95.00% using the public HC18-Grand challenge dataset. Based on the obtained results, we concluded that a fine-tuned, simple, well-structured model used in clinical devices can outperform complex models.
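The Dice and Jaccard coefficients reported above (and in the brachytherapy study earlier in this list) follow their standard set-overlap definitions; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: twice the overlap divided by the total size
    of both boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: overlap divided by the union of both masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()
```

The two are monotonically related (Dice = 2J / (1 + J)), so they rank segmentations identically; reporting both, as these papers do, mainly aids comparison across the literature.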


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Image Processing, Computer-Assisted/methods
11.
J Digit Imaging ; 35(3): 469-481, 2022 06.
Article in English | MEDLINE | ID: mdl-35137305

ABSTRACT

A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large realistic/diverse dataset. Clinical brain PET/CT/MR images, including full-dose (FD), low-dose (LD) corresponding to only 5% of the events acquired in the FD scan, non-attenuation-corrected (NAC) and CT-based measured attenuation correction (MAC) PET images, CT images, and T1 and T2 MR sequences of 35 patients were included. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to make a natural presentation using information in the frequency domain of images from two separate patients, as well as the blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without using the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed for quantitative evaluation. The quantitative comparison between the registered small dataset containing 35 patients and the large dataset containing 350 synthesized plus 35 real image sets demonstrated improvement of the RMSE and SSIM by 29% and 8% for the LD to FD, 40% and 7% for the LD+MRI to FD, 16% and 8% for the NAC to MAC, and 24% and 11% for the MRI to CT mapping tasks, respectively.
The qualitative/quantitative analysis demonstrated that the proposed method improved the performance of all four DNN models, producing images of higher quality with lower quantitative bias and variance compared to the reference images.
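Laplacian blending, as used above, combines two images so that low frequencies transition smoothly across the mask boundary while high frequencies follow it sharply. A minimal two-band sketch (full Laplacian pyramids use several levels; the box blur here stands in for a proper Gaussian low-pass):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur with edge padding; used as the low-pass filter."""
    pad = k // 2
    kernel = np.ones(k) / k
    out = img.astype(float)
    for axis in range(2):
        pad_width = [(pad, pad) if a == axis else (0, 0) for a in range(2)]
        padded = np.pad(out, pad_width, mode="edge")
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="valid"), axis, padded)
    return out

def laplacian_blend(img_a, img_b, mask, k=5):
    """Two-band blend: low frequencies are mixed with a blurred mask,
    the high-frequency (Laplacian) band with the sharp mask."""
    low_a, low_b = box_blur(img_a, k), box_blur(img_b, k)
    high_a, high_b = img_a - low_a, img_b - low_b
    soft = box_blur(mask.astype(float), k)
    low = soft * low_a + (1.0 - soft) * low_b
    high = mask * high_a + (1.0 - mask) * high_b
    return low + high
```

Blending anatomy from two registered patients this way yields hybrid images with no visible seam, which is what makes the synthesized cases usable as additional training data.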


Subjects
Deep Learning , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neuroimaging/methods , Positron Emission Tomography Computed Tomography
12.
Neuroimage ; 245: 118697, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34742941

ABSTRACT

PURPOSE: Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patients' comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS: Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images to compare the performance of the DNN in sinogram space (SS) vs its implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS: SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS compared to IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images.
The voxel-wise t-test analysis revealed the presence of voxels with statistically significantly lower values in LD, IS, and SS images compared to FD images. CONCLUSION: The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach led to higher image quality and lower bias compared to images predicted from LD images.


Subjects
Deep Learning , Image Processing, Computer-Assisted/methods , Neurodegenerative Diseases/diagnostic imaging , Neuroimaging/methods , Positron Emission Tomography Computed Tomography , Aged , Databases, Factual , Female , Humans , Male , Signal-To-Noise Ratio
13.
Eur J Nucl Med Mol Imaging ; 48(3): 670-682, 2021 03.
Article in English | MEDLINE | ID: mdl-32875430

ABSTRACT

PURPOSE: In the era of precision medicine, patient-specific dose calculation using Monte Carlo (MC) simulations is deemed the gold standard technique for risk-benefit analysis of radiation hazards and correlation with patient outcome. Hence, we propose a novel method to perform whole-body personalized organ-level dosimetry, taking into account the heterogeneity of activity distribution, non-uniformity of the surrounding medium, and patient-specific anatomy, using deep learning algorithms. METHODS: We extended the voxel-scale MIRD approach from a single S-value kernel to specific S-value kernels corresponding to patient-specific anatomy to construct 3D dose maps using hybrid emission/transmission image sets. In this context, we employed a deep neural network (DNN) to predict the distribution of deposited energy, representing specific S-values, from a single source in the center of a 3D kernel composed of human body geometry. The training dataset consists of density maps obtained from CT images and the reference voxelwise S-values generated using Monte Carlo simulations. Accordingly, specific S-value kernels are inferred from the trained model and whole-body dose maps constructed in a manner analogous to the voxel-based MIRD formalism, i.e., convolving specific voxel S-values with the activity map. The dose map predicted using the DNN was compared with the reference generated using MC simulations, with MIRD-based methods using single and multiple S-values (SSV and MSV), and with the Olinda/EXM software package. RESULTS: The predicted specific voxel S-value kernels exhibited good agreement with the MC-based kernels serving as reference, with a mean relative absolute error (MRAE) of 4.5 ± 1.8%. Bland-Altman analysis showed the lowest dose bias (2.6%) and smallest variance (CI: -6.6, +1.3) for the DNN. The MRAE of the estimated absorbed dose between DNN, MSV, and SSV with respect to the MC simulation reference was 2.6%, 3%, and 49%, respectively.
In organ-level dosimetry, the MRAE between the proposed method and MSV, SSV, and OLINDA/EXM was 5.1%, 21.8%, and 23.5%, respectively. CONCLUSION: The proposed DNN-based whole-body internal dosimetry exhibited comparable performance to the direct Monte Carlo approach while overcoming the limitations of conventional dosimetry techniques in nuclear medicine.
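The voxel-based MIRD construction described above — convolving a voxel S-value kernel with the activity map to obtain the dose map — can be sketched as follows. This is a minimal pure-Python illustration with toy values: the `convolve3d` helper and the toy activity/kernel are ours, not the authors' implementation; their DNN-predicted specific S-value kernels would take the place of `kernel`.

```python
# Voxel-scale MIRD sketch: dose(r) = sum over source voxels s of
# activity(s) * S(r - s), i.e., a 3D convolution of the activity map
# (cumulated decays per voxel) with a voxel S-value kernel (absorbed
# dose to a target voxel per decay in a source voxel). Toy values only.

def convolve3d(activity, kernel):
    """Naive 3D convolution; kernel dimensions must be odd."""
    nz, ny, nx = len(activity), len(activity[0]), len(activity[0][0])
    kz, ky, kx = len(kernel), len(kernel[0]), len(kernel[0][0])
    rz, ry, rx = kz // 2, ky // 2, kx // 2
    dose = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                total = 0.0
                for dz in range(-rz, rz + 1):
                    for dy in range(-ry, ry + 1):
                        for dx in range(-rx, rx + 1):
                            sz, sy, sx = z + dz, y + dy, x + dx
                            if 0 <= sz < nz and 0 <= sy < ny and 0 <= sx < nx:
                                # S-value for a source at offset (dz, dy, dx)
                                total += (activity[sz][sy][sx]
                                          * kernel[rz - dz][ry - dy][rx - dx])
                dose[z][y][x] = total
    return dose

# Toy example: a single source voxel and a 1x1x1 "self-dose only" kernel.
activity = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
activity[1][1][1] = 10.0          # decays in the source voxel (toy units)
kernel = [[[2.0]]]                # toy self-dose S-value
dose = convolve3d(activity, kernel)  # dose[1][1][1] == 20.0
```

With patient-specific S-value kernels substituted for the toy kernel, the same convolution produces the whole-body 3D dose map described in the abstract.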


Subjects
Deep Learning, Human Body, Computer Simulation, Humans, Monte Carlo Method, Phantoms, Imaging, Radiometry
14.
Eur J Nucl Med Mol Imaging ; 48(8): 2405-2415, 2021 07.
Article in English | MEDLINE | ID: mdl-33495927

ABSTRACT

PURPOSE: The current trend in PET examinations is to reduce the injected activity and/or the acquisition time in order to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS: Instead of using synthetic LD scans, two separate clinical WB 18F-Fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min) consisting of 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNET) models, denoted CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS: CGAN scored 4.92 and 3.88 (out of 5, adequate to good) for the brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and - 3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of - 0.36, + 0.47 for CGAN compared with the reference FD images for malignant lesions. CONCLUSION: CycleGAN is able to synthesize clinical FD WB PET images from LD images acquired with 1/8th of the standard injected activity or acquisition time. The predicted FD images exhibit comparable performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
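The Bland-Altman analysis used above to report SUV bias against the reference FD images reduces to the mean of the paired differences (the bias) and its 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch; the `bland_altman` helper and the sample SUV values are hypothetical, not the study's code:

```python
from math import sqrt

def bland_altman(pred, ref):
    """Return (bias, lower LoA, upper LoA) for paired measurements.

    bias  = mean of (pred - ref)
    LoA   = bias -/+ 1.96 * sample SD of the differences (95% limits)
    """
    diffs = [p - r for p, r in zip(pred, ref)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical lesion SUVs from predicted-FD and reference-FD images:
bias, lower, upper = bland_altman([1.1, 2.0, 3.1, 3.9], [1.0, 2.1, 3.0, 4.0])
```

Dividing `bias` by the mean reference SUV (times 100) gives the percentage SUV bias quoted in the abstract.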


Subjects
Deep Learning, Positron Emission Tomography Computed Tomography, Fluorodeoxyglucose F18, Humans, Positron-Emission Tomography, Tomography, X-Ray Computed
15.
Eur Radiol ; 31(8): 6384-6396, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33569626

ABSTRACT

OBJECTIVES: The susceptibility of CT imaging to metallic objects gives rise to strong streak artefacts and skewed information about the attenuation medium around the metallic implants. This metal-induced artefact in CT images leads to inaccurate attenuation correction in PET/CT imaging. This study investigates the potential of deep learning-based metal artefact reduction (MAR) in quantitative PET/CT imaging. METHODS: Deep learning-based metal artefact reduction approaches were implemented in the image (DLI-MAR) and projection (DLP-MAR) domains. The proposed algorithms were quantitatively compared to the normalized MAR (NMAR) method using simulated and clinical studies. Eighty metal-free CT images were employed for the simulation of metal artefacts as well as for training and evaluation of the aforementioned MAR approaches. Thirty 18F-FDG PET/CT images affected by the presence of metallic implants were retrospectively employed for clinical assessment of the MAR techniques. RESULTS: The evaluation of MAR techniques on the simulation dataset demonstrated the superior performance of the DLI-MAR approach (structural similarity (SSIM) = 0.95 ± 0.2 compared to 0.94 ± 0.2 and 0.93 ± 0.3 obtained using DLP-MAR and NMAR, respectively) in minimizing metal artefacts in CT images. The presence of metallic artefacts in CT images or PET attenuation correction maps led to quantitative bias, image artefacts, and under- and overestimation of scatter correction of PET images. The DLI-MAR technique led to a quantitative PET bias of 1.3 ± 3% compared to 10.5 ± 6% without MAR and 3.2 ± 0.5% achieved by NMAR. CONCLUSION: The DLI-MAR technique was able to reduce the adverse effects of metal artefacts on PET images through the generation of accurate attenuation maps from corrupted CT images. KEY POINTS: • The presence of metallic objects, such as dental implants, gives rise to severe photon starvation, beam hardening and scattering, thus leading to adverse artefacts in reconstructed CT images. 
• The aim of this work is to develop and evaluate a deep learning-based MAR to improve CT-based attenuation and scatter correction in PET/CT imaging.
• Deep learning-based MAR in the image (DLI-MAR) domain outperformed its counterpart implemented in the projection (DLP-MAR) domain. The DLI-MAR approach minimized the adverse impact of metal artefacts on whole-body PET images through generating accurate attenuation maps from corrupted CT images.
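A classical projection-domain MAR building block — inpainting the metal trace in each projection by interpolation, the idea underlying interpolation-based predecessors of NMAR — can be sketched in 1D as follows. This is a simplified illustration for context, not the deep learning approach evaluated above; the `inpaint_metal_trace` helper is hypothetical:

```python
def inpaint_metal_trace(projection, metal_mask):
    """Replace metal-corrupted bins by linear interpolation across the trace.

    projection: list of detector-bin values for one projection angle
    metal_mask: list of bools, True where the bin is metal-corrupted
    """
    out = list(projection)
    n = len(out)
    i = 0
    while i < n:
        if metal_mask[i]:
            # Find the end of this contiguous corrupted run.
            j = i
            while j < n and metal_mask[j]:
                j += 1
            # Bridge linearly between the nearest clean neighbours.
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

# Toy projection with two corrupted bins bridged to [1, 2, 3, 4, 5]:
fixed = inpaint_metal_trace([1.0, 2.0, 0.0, 0.0, 5.0],
                            [False, False, True, True, False])
```

Deep learning-based DLP-MAR replaces this hand-crafted interpolation with a learned mapping, and DLI-MAR operates on the reconstructed image instead.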


Subjects
Artifacts, Deep Learning, Algorithms, Humans, Image Processing, Computer-Assisted, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography, Retrospective Studies, Tomography, X-Ray Computed
16.
Eur Radiol ; 31(3): 1420-1431, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32879987

ABSTRACT

OBJECTIVES: The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for clinical diagnosis of COVID-19 patients. METHODS: In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output pairs for the training, test, and external validation sets, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of predicted CT images was assessed using root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned reflecting the subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). RESULTS: The radiation dose in terms of CT dose index (CTDIvol) was reduced by up to 89%. The RMSE decreased from 0.16 ± 0.05 to 0.09 ± 0.02 and from 0.16 ± 0.06 to 0.08 ± 0.02 for the predicted compared with ultra-low-dose CT images in the test and external validation sets, respectively. The overall scoring assigned by radiologists showed a score of 4.72 ± 0.57 out of 5 for reference full-dose CT images, while ultra-low-dose CT images were rated 2.78 ± 0.9. The predicted CT images using the deep learning algorithm achieved a score of 4.42 ± 0.8. CONCLUSIONS: The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction. KEY POINTS: • Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis. 
• Deep learning-based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%.
• Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers, and as such, further research and development is warranted to address these limitations.
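The RMSE and PSNR figures of merit used above can be computed as follows. A minimal sketch assuming intensities normalized to a `data_range` of 1.0 (an assumption on our part, consistent with the RMSE values around 0.1 reported above); the helpers are illustrative, not the study's code:

```python
from math import sqrt, log10

def rmse(a, b):
    """Root mean square error between two equal-length intensity lists."""
    n = len(a)
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n)

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(data_range / RMSE)."""
    e = rmse(a, b)
    return float('inf') if e == 0 else 20 * log10(data_range / e)

# Hypothetical flattened voxel values: a constant error of 0.1 gives
# RMSE = 0.1 and PSNR = 20 dB for a unit data range.
reference = [0.0, 0.0, 0.0, 0.0]
predicted = [0.1, 0.1, 0.1, 0.1]
err = rmse(reference, predicted)
quality = psnr(reference, predicted)
```

Lower RMSE and higher PSNR both indicate that the predicted image is closer to the full-dose reference, matching the improvements reported in the RESULTS.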


Subjects
COVID-19/diagnostic imaging, Deep Learning, Tomography, X-Ray Computed/methods, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Radiation Dosage, Reproducibility of Results, SARS-CoV-2, Signal-To-Noise Ratio
17.
J Nucl Cardiol ; 28(6): 2761-2779, 2021 12.
Article in English | MEDLINE | ID: mdl-32347527

ABSTRACT

INTRODUCTION: The purpose of this work was to assess the feasibility of acquisition time reduction in MPI-SPECT imaging using deep learning techniques through two main approaches, namely reduction of the acquisition time per projection and reduction of the number of angular projections. METHODS: SPECT imaging was performed using a dedicated dual-head cardiac SPECT camera with a fixed 90° angle. This study included a prospective cohort of 363 patients with various clinical indications (normal, ischemia, and infarct) referred for MPI-SPECT. For each patient, 32 projections of 20 seconds per projection were acquired using a step-and-shoot protocol from the right anterior oblique to the left posterior oblique view. SPECT projection data were reconstructed using the OSEM algorithm (6 iterations, 4 subsets, Butterworth post-reconstruction filter). For each patient, four different datasets were generated, namely full-time (20 seconds) projections (FT), half-time (10 seconds) acquisition per projection (HT), 32 full projections (FP), and 16 half projections (HP). The image-to-image transformation via the residual network was implemented to predict FT from HT and FP from HP images in the projection domain. Qualitative and quantitative evaluations of the proposed framework were performed through a tenfold cross-validation scheme using the root mean square error (RMSE), absolute relative error (ARE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics, as well as clinical quantitative parameters. RESULTS: The results demonstrated that the predicted FT had better image quality than the predicted FP images. Among the generated images, predicted FT images resulted in the lowest error metrics (RMSE = 6.8 ± 2.7, ARE = 3.1 ± 1.1%) and the highest similarity index and signal-to-noise ratio (SSIM = 0.97 ± 1.1, PSNR = 36.0 ± 1.4). 
The highest error metrics (RMSE = 32.8 ± 12.8, ARE = 16.2 ± 4.9%) and the lowest similarity and signal-to-noise ratio (SSIM = 0.93 ± 2.6, PSNR = 31.7 ± 2.9) were observed for HT images. The RMSE decreased significantly (P value < .05) for predicted FT (6.8 ± 2.7) relative to predicted FP (8.0 ± 3.6). CONCLUSION: Reducing the acquisition time per projection significantly increased the error metrics. The deep neural network effectively recovers image quality and reduces bias in quantification metrics. Further research should be undertaken to explore the impact of time reduction in gated MPI-SPECT.
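The ARE metric reported above is commonly defined as the mean absolute voxel-wise relative error with respect to the reference image; a minimal sketch assuming that definition (the abstract does not spell out the exact formula, so the `absolute_relative_error` helper and its `eps` guard are our assumptions):

```python
def absolute_relative_error(pred, ref, eps=1e-6):
    """Mean absolute relative error (%) of predicted vs. reference values.

    eps guards against division by near-zero reference voxels — an
    implementation choice on our part, not specified by the study.
    """
    n = len(pred)
    return 100.0 * sum(abs(p - r) / max(abs(r), eps)
                       for p, r in zip(pred, ref)) / n

# Hypothetical voxel values: 10% error in each voxel gives ARE = 10%.
are = absolute_relative_error([1.1, 2.2], [1.0, 2.0])
```

Under this definition, the predicted-FT ARE of 3.1 ± 1.1% quoted above means the synthesized projections deviate from the full-time reference by about 3% per voxel on average.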


Subjects
Cardiac Imaging Techniques/methods, Coronary Circulation, Myocardial Perfusion Imaging/methods, Neural Networks, Computer, Tomography, Emission-Computed, Single-Photon/methods, Feasibility Studies, Humans, Prospective Studies, Time Factors
18.
J Nucl Cardiol ; 28(6): 2730-2744, 2021 12.
Article in English | MEDLINE | ID: mdl-32333282

ABSTRACT

BACKGROUND: The aim of this work was to assess the robustness of cardiac SPECT radiomic features against changes in imaging settings, including acquisition and reconstruction parameters. METHODS: Four commercial SPECT and SPECT/CT cameras were used to acquire images of a static cardiac phantom mimicking typical myocardial perfusion imaging using 185 MBq of 99mTc. The effects of different image acquisition and reconstruction parameters, including number of views, view matrix size, attenuation correction, as well as image reconstruction-related parameters (algorithm, number of iterations, number of subsets, type of post-reconstruction filter, and its associated parameters, including filter order and cut-off frequency) were studied. In total, 5,063 transverse views were reconstructed by varying the aforementioned factors. Eighty-seven radiomic features, including first-, second-, and high-order textures, were extracted from these images. To assess reproducibility and repeatability, the coefficient of variation (COV), as a widely adopted metric, was measured for each of the radiomic features over the different imaging settings. RESULTS: The Inverse Difference Moment Normalized (IDMN) and Inverse Difference Normalized (IDN) features from the Gray Level Co-occurrence Matrix (GLCM), Run Percentage (RP) from the Gray Level Run Length Matrix (GLRLM), Zone Entropy (ZE) from the Gray Level Size Zone Matrix (GLSZM), and Dependence Entropy (DE) from the Gray Level Dependence Matrix (GLDM) feature sets were the only features that exhibited high reproducibility (COV ≤ 5%) against changes in all imaging settings. In addition, Large Area Low Gray Level Emphasis (LALGLE), Small Area Low Gray Level Emphasis (SALGLE), and Low Gray Level Zone Emphasis (LGLZE) from the GLSZM, and Small Dependence Low Gray Level Emphasis (SDLGLE) from the GLDM feature sets turned out to be less reproducible (COV > 20%) against changes in imaging settings. 
The GLRLM feature set had the highest proportion of reproducible features (31.88% with COV < 5%), whereas the GLDM feature set had the lowest, with 54.2% of its features exhibiting COV > 20%. Matrix size had the largest impact on feature variability, as most features were not repeatable when the matrix size was modified, with 82.8% of them having a COV > 20%. CONCLUSION: The repeatability and reproducibility of SPECT/CT cardiac radiomic features under different imaging settings is feature-dependent. Different image acquisition and reconstruction protocols have variable effects on radiomic features. The radiomic features exhibiting low COV are potential candidates for future clinical studies.
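The COV-based reproducibility screening described above can be sketched as follows, using the thresholds from the study (COV ≤ 5% for reproducible, COV > 20% for poorly reproducible features). The helper names and sample feature values are hypothetical:

```python
from math import sqrt

def cov_percent(values):
    """Coefficient of variation (%) of one feature across imaging settings."""
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
    return 100.0 * sd / mean

def classify(features):
    """Map each feature name to a reproducibility label via its COV."""
    out = {}
    for name, vals in features.items():
        c = cov_percent(vals)
        if c <= 5:
            out[name] = 'reproducible'      # COV <= 5%
        elif c > 20:
            out[name] = 'unstable'          # COV > 20%
        else:
            out[name] = 'intermediate'
    return out

# Hypothetical feature values measured under three imaging settings:
labels = classify({
    'IDMN':   [100.0, 101.0, 99.0],   # COV = 1%  -> reproducible
    'LALGLE': [100.0, 150.0, 50.0],   # COV = 50% -> unstable
})
```

Note the use of the sample (n - 1) standard deviation; whether the study used the sample or population SD is not stated in the abstract.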


Subjects
Cardiac Imaging Techniques/methods, Image Processing, Computer-Assisted, Phantoms, Imaging, Tomography, Emission-Computed, Single-Photon/methods, Humans, Reproducibility of Results, Single Photon Emission Computed Tomography Computed Tomography
19.
Hum Brain Mapp ; 41(13): 3667-3679, 2020 09.
Article in English | MEDLINE | ID: mdl-32436261

ABSTRACT

PET attenuation correction (AC) on systems lacking CT/transmission scanning, such as dedicated brain PET scanners and hybrid PET/MRI, is challenging. Direct AC in image-space, wherein PET images corrected for attenuation and scatter are synthesized from non-attenuation-corrected PET (PET-nonAC) images in an end-to-end fashion using deep learning approaches (DLAC), is evaluated for various radiotracers used in molecular neuroimaging studies. One hundred eighty brain PET scans acquired using 18F-FDG, 18F-DOPA, 18F-Flortaucipir (targeting tau pathology), and 18F-Flutemetamol (targeting amyloid pathology) radiotracers (40 training/validation + 5 external test subjects for each radiotracer) were included. The PET data were reconstructed using CT-based AC (CTAC) to generate reference PET-CTAC images and without AC to produce PET-nonAC images. A deep convolutional neural network was trained to generate PET attenuation-corrected images (PET-DLAC) from PET-nonAC. The quantitative accuracy of this approach was investigated separately for each radiotracer, considering the values obtained from PET-CTAC images as reference. A segmented AC map (PET-SegAC) containing soft tissue and background air was also included in the evaluation. Quantitative analysis of PET images demonstrated superior performance of the DLAC approach compared to the SegAC technique for all tracers. Despite the relatively low quantitative bias observed when using the DLAC approach, this approach appears vulnerable to outliers, resulting in noticeable local pseudo uptake and false cold regions. Direct AC in image-space using deep learning demonstrated quantitatively acceptable performance, with less than 9% absolute SUV bias for the four investigated neuroimaging radiotracers. However, this approach is vulnerable to outliers, which result in large local quantitative bias.


Subjects
Aniline Compounds, Benzothiazoles, Carbolines, Cognitive Dysfunction/diagnostic imaging, Deep Learning, Dihydroxyphenylalanine/analogs & derivatives, Fluorodeoxyglucose F18, Neuroimaging, Positron-Emission Tomography, Radiopharmaceuticals, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Neuroimaging/standards, Positron-Emission Tomography/standards, Tomography, X-Ray Computed, Young Adult
20.
Eur J Nucl Med Mol Imaging ; 47(11): 2533-2548, 2020 10.
Article in English | MEDLINE | ID: mdl-32415552

ABSTRACT

OBJECTIVE: We demonstrate the feasibility of direct generation of attenuation- and scatter-corrected images from uncorrected images (PET-nonASC) using deep residual networks in whole-body 18F-FDG PET imaging. METHODS: Two- and three-dimensional deep residual networks using 2D successive slices (DL-2DS), 3D slices (DL-3DS), and 3D patches (DL-3DP) as input were constructed to perform joint attenuation and scatter correction on uncorrected whole-body images in an end-to-end fashion. We included 1150 clinical whole-body 18F-FDG PET/CT studies, among which 900, 100, and 150 patients were randomly partitioned into training, validation, and independent validation sets, respectively. The images generated by the proposed approach were assessed using various evaluation metrics, including the root mean squared error (RMSE) and absolute relative error (ARE %), using CT-based attenuation- and scatter-corrected (CTAC) PET images as reference. PET image quantification variability was also assessed through voxel-wise standardized uptake value (SUV) bias calculation in different regions of the body (head, neck, chest, liver-lung, abdomen, and pelvis). RESULTS: Our proposed attenuation and scatter correction (Deep-JASC) algorithm provided good image quality, comparable with that produced by CTAC. Across the 150 patients of the independent external validation set, the voxel-wise relative errors were - 1.72 ± 4.22%, 3.75 ± 6.91%, and - 3.08 ± 5.64% for DL-2DS, DL-3DS, and DL-3DP, respectively. Overall, the DL-2DS approach led to superior performance compared with the other two 3D approaches. The brain and neck regions had the highest and lowest RMSE values between Deep-JASC and CTAC images, respectively. However, the largest ARE was observed in the chest (15.16 ± 3.96%) and liver/lung (11.18 ± 3.23%) regions for DL-2DS. DL-3DS and DL-3DP performed slightly better in the chest region, leading to AREs of 11.16 ± 3.42% and 11.69 ± 2.71%, respectively (p value < 0.05). 
The joint histogram analysis resulted in correlation coefficients of 0.985, 0.980, and 0.981 for the DL-2DS, DL-3DS, and DL-3DP approaches, respectively. CONCLUSION: This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 18F-FDG PET images using emission-only data via a deep residual network. The proposed approach achieved accurate attenuation and scatter correction without the need for anatomical images, such as CT and MRI. The technique is applicable in a clinical setting on standalone PET or PET/MRI systems. Nevertheless, although Deep-JASC showed promising quantitative accuracy, vulnerability to noise was observed, leading to pseudo hot/cold spots and/or poor organ boundary definition in the resulting PET images.
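The correlation coefficients from the joint histogram analysis above correspond to the Pearson correlation between voxel values of the predicted and CTAC reference images; a minimal sketch (the `pearson_r` helper and sample values are illustrative, not the study's code):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length voxel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical voxel intensities: perfectly proportional images give r ~ 1,
# matching the ~0.98 coefficients reported for the Deep-JASC variants.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

In practice the correlation would be computed over the flattened voxel arrays of each image pair, optionally restricted to a body mask.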


Subjects
Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Positron-Emission Tomography, Tomography, X-Ray Computed