Results 1 - 16 of 16
1.
Ann Nucl Med ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717535

ABSTRACT

OBJECTIVE: In preclinical studies, high-throughput positron emission tomography (PET) imaging, known as simultaneous multiple-animal scanning, can reduce the time spent on animal experiments, the cost of PET tracers, and the risks associated with PET tracer synthesis. It is well known that the image quality achieved by high-throughput imaging depends on the PET system. Herein, we investigated the influence of a large field-of-view (FOV) PET scanner on high-throughput imaging. METHODS: Using a small-animal PET scanner with a large FOV, we compared the image quality obtained when scanning four objects simultaneously with that obtained when scanning a single object, in both phantom and animal studies. We assessed image quality using uniformity, recovery coefficient (RC), and spillover ratio (SOR), which are indicators of image noise, spatial resolution, and quantitative precision, respectively. For the phantom study, we used the NEMA NU 4-2008 image quality phantom and evaluated uniformity, RC, and SOR; for the animal study, we used Wistar rats and evaluated spillover in the heart and kidney. RESULTS: In the phantom study, scanning four phantoms had little effect on image quality, particularly SOR, compared with scanning one phantom. Likewise, in the animal study, scanning four rats had little effect on spillover from the heart muscle and kidney cortex compared with scanning one rat. CONCLUSIONS: This study demonstrated that an animal PET scanner with a large FOV is suitable for high-throughput imaging. Thus, a large-FOV PET scanner can support drug discovery and bridging research through rapid pharmacological and pathological evaluation.
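The three NEMA NU 4-2008 metrics named above reduce to simple ratios over phantom regions of interest. A minimal sketch (the toy ROI values and the averaging are simplified here; a real analysis follows the standard's prescribed slices and ROI sizes):

```python
import numpy as np

def uniformity(roi):
    """Percent standard deviation over the uniform region (image noise)."""
    return 100.0 * roi.std() / roi.mean()

def recovery_coefficient(rod_max, uniform_mean):
    """Ratio of the hot-rod maximum to the uniform-region mean (resolution)."""
    return rod_max / uniform_mean

def spillover_ratio(cold_mean, uniform_mean):
    """Ratio of the cold-insert mean to the uniform-region mean (quantitation)."""
    return cold_mean / uniform_mean

# Toy voxel values standing in for measured ROIs
uniform_roi = np.array([1.00, 0.98, 1.02, 1.01, 0.99])
print(round(uniformity(uniform_roi), 2))
print(round(recovery_coefficient(0.85, uniform_roi.mean()), 3))
print(round(spillover_ratio(0.12, uniform_roi.mean()), 3))
```

Lower uniformity and SOR and an RC close to 1 indicate better image quality, which is why these three numbers summarize the single- versus four-object comparison.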

2.
Phys Med Biol ; 69(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38640921

ABSTRACT

Objective. This study aims to introduce a novel back-projection-induced U-Net-shaped architecture, called ReconU-Net, based on the original U-Net architecture, for deep learning-based direct positron emission tomography (PET) image reconstruction. Additionally, our objective is to visualize the behavior of direct PET image reconstruction by comparing the proposed ReconU-Net architecture with the original U-Net architecture and the existing DeepPET encoder-decoder architecture without skip connections. Approach. The proposed ReconU-Net architecture uniquely integrates the physical model of the back-projection operation into the skip connection. This distinctive feature facilitates the effective transfer of intrinsic spatial information from the input sinogram to the reconstructed image via an embedded physical model. The proposed ReconU-Net was trained using Monte Carlo simulation data from the BrainWeb phantom and tested on both simulated and real Hoffman brain phantom data. Main results. The proposed ReconU-Net method provided better reconstructed images in terms of peak signal-to-noise ratio and contrast recovery coefficient than the original U-Net and DeepPET methods. Further analysis showed that the proposed ReconU-Net architecture can transfer features of multiple resolutions, especially non-abstract high-resolution information, through its skip connections. Unlike the U-Net and DeepPET methods, the proposed ReconU-Net successfully reconstructed the real Hoffman brain phantom, despite being trained only on simulated data. Significance. The proposed ReconU-Net can improve the fidelity of direct PET image reconstruction, even with small training datasets, by leveraging the synergistic relationship between data-driven modeling and the physics model of the imaging process.
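The key idea of ReconU-Net, carrying back-projected, image-domain information through the skip connection, can be illustrated with a toy linear projector. In this sketch the random system matrix `A`, the 4 × 4 image size, and the `reconu_net_sketch` function are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in system matrix: 8 projection bins x 16 pixels (real scanners
# use a ray-tracing forward model, not random weights)
A = rng.random((8, 16))

def forward_project(image):
    return A @ image.ravel()

def back_project(sinogram):
    # The physics-based skip path: the sinogram is mapped into image
    # space by the adjoint of the projection operator.
    return (A.T @ sinogram).reshape(4, 4)

def reconu_net_sketch(sinogram, decoder_features):
    # Encoder/decoder bodies omitted; the key point is that the skip
    # path carries back-projected (image-domain) information rather
    # than raw sinogram-domain features.
    skip = back_project(sinogram)
    return np.concatenate([decoder_features, skip[None]], axis=0)

image = rng.random((4, 4))
sino = forward_project(image)
out = reconu_net_sketch(sino, decoder_features=np.zeros((2, 4, 4)))
print(out.shape)  # (3, 4, 4)
```

Because the skip input is already in image space, the decoder receives spatially aligned information, which is what a plain encoder-decoder such as DeepPET lacks.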


Subjects
Image Processing, Computer-Assisted , Phantoms, Imaging , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Deep Learning , Monte Carlo Method , Humans
4.
Radiol Phys Technol ; 17(1): 24-46, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38319563

ABSTRACT

This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.


Subjects
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Neural Networks, Computer , Algorithms , Phantoms, Imaging
5.
PLoS One ; 19(2): e0298132, 2024.
Article in English | MEDLINE | ID: mdl-38349916

ABSTRACT

PURPOSE: Measurements of macular pigment optical density (MPOD) using autofluorescence spectroscopy underestimate actual values in eyes with cataracts. Previously, we proposed a correction method for this error using deep learning (DL); however, the correction performance was only validated through internal cross-validation. This cross-sectional study aimed to validate the approach using an external validation dataset. METHODS: MPODs at 0.25°, 0.5°, 1°, and 2° eccentricities and macular pigment optical volume (MPOV) within 9° eccentricity were measured using SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany) in 197 eyes (training dataset inherited from our previous study) and 157 eyes (validation dataset) before and after cataract surgery. A DL model was trained to predict the corrected value from the preoperative value using the training dataset, and we measured the discrepancy between the corrected value and the actual postoperative value. Subsequently, the prediction performance was validated using the validation dataset. RESULTS: Using the validation dataset, the mean absolute errors for MPOD and MPOV corrected using DL ranged from 8.2 to 12.4%, lower than the values with no correction (P < 0.001, linear mixed model with Tukey's test). The error depended on the quality of the autofluorescence image used to calculate MPOD. The mean errors in high- and moderate-quality images ranged from 6.0 to 11.4%, lower than those in poor-quality images. CONCLUSION: The usefulness of the DL correction method was validated. Deep learning reduced the error for images of relatively good autofluorescence quality, whereas poor-quality images were not corrected.


Subjects
Cataract , Deep Learning , Macular Pigment , Humans , Lutein , Cross-Sectional Studies , Zeaxanthins , Cataract/therapy , Spectrum Analysis
6.
IEEE Trans Med Imaging ; 43(5): 1654-1663, 2024 May.
Article in English | MEDLINE | ID: mdl-38109238

ABSTRACT

Direct positron emission imaging (dPEI), which does not require a mathematical reconstruction step, is a next-generation molecular imaging modality. To maximize the practical applicability of the dPEI system to clinical practice, we introduce a novel reconstruction-free image-formation method called direct µCompton imaging, which directly localizes the interaction position of Compton scattering from the annihilation photons in a three-dimensional space by utilizing the same compact geometry as that for dPEI, involving ultrafast time-of-flight radiation detectors. This unique imaging method not only provides the anatomical information about an object but can also be applied to attenuation correction of dPEI images. Evaluations through Monte Carlo simulation showed that functional and anatomical hybrid images can be acquired using this multimodal imaging system. By fusing the images, it is possible to simultaneously access various object data, which ensures the synergistic effect of the two imaging methodologies. In addition, attenuation correction improves the quantification of dPEI images. The realization of the whole reconstruction-free imaging system from image generation to quantitative correction provides a new perspective in molecular imaging.


Subjects
Image Processing, Computer-Assisted , Monte Carlo Method , Phantoms, Imaging , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Positron-Emission Tomography/instrumentation , Algorithms , Humans , Computer Simulation
7.
Phys Med Biol ; 68(15)2023 07 21.
Article in English | MEDLINE | ID: mdl-37406637

ABSTRACT

Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction method that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction has not been possible to date because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization into a block iteration with sequential learning over an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with the maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicated that the proposed method can produce high-quality images without a prior training dataset. Thus, the proposed method could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
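The loss described above combines sinogram-domain data fidelity through the forward projector with a relative difference penalty on the image estimate. A minimal 1D sketch (the identity projector and the `beta` and `gamma` values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def rdp_penalty(x, gamma=2.0):
    """Relative difference penalty (RDP) over adjacent voxels, 1D sketch."""
    d = x[1:] - x[:-1]
    s = x[1:] + x[:-1]
    return float(np.sum(d**2 / (s + gamma * np.abs(d) + 1e-12)))

def dip_loss(net_output, system_matrix, sinogram, beta=0.1):
    """DIP objective: sinogram-domain data fidelity via the forward
    projector, plus the RDP regularizer on the network's image output."""
    residual = system_matrix @ net_output - sinogram
    return float(np.sum(residual**2)) + beta * rdp_penalty(net_output)

# A perfectly uniform image matched to its own projection gives zero loss
x = np.ones(4)
A = np.eye(4)
print(dip_loss(x, A, A @ x))  # 0.0
```

Gradients of this loss with respect to the network weights drive the DIP optimization; the block iteration in the paper applies the same idea to one block sinogram at a time to fit within GPU memory.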


Subjects
Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Positron-Emission Tomography/methods , Algorithms , Phantoms, Imaging
8.
IEEE Trans Med Imaging ; 42(6): 1822-1834, 2023 06.
Article in English | MEDLINE | ID: mdl-37022039

ABSTRACT

List-mode positron emission tomography (PET) image reconstruction is an important tool for PET scanners with many lines of response and additional information such as time-of-flight and depth-of-interaction. Deep learning is one possible solution for enhancing the quality of PET image reconstruction. However, the application of deep learning techniques to list-mode PET image reconstruction has not progressed because list data are a sequence of bit codes unsuitable for processing by convolutional neural networks (CNNs). In this study, we propose a novel list-mode PET image reconstruction method using an unsupervised CNN called deep image prior (DIP); this is the first attempt to integrate list-mode PET image reconstruction and CNNs. The proposed list-mode DIP reconstruction (LM-DIPRecon) method alternately iterates the regularized list-mode dynamic row-action maximum likelihood algorithm (LM-DRAMA) and magnetic resonance imaging-conditioned DIP (MR-DIP) using an alternating direction method of multipliers. We evaluated LM-DIPRecon using both simulation and clinical data, and it achieved sharper images and better tradeoff curves between contrast and noise than the LM-DRAMA, MR-DIP, and sinogram-based DIPRecon methods. These results indicate that LM-DIPRecon is useful for quantitative PET imaging with limited events while retaining accurate raw data information. In addition, as list data have finer temporal information than dynamic sinograms, list-mode deep image prior reconstruction is expected to be useful for 4D PET imaging and motion correction.
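The alternation in LM-DIPRecon follows the usual ADMM pattern: a data-fidelity update (LM-DRAMA in the paper), a network denoising update (MR-DIP), and a dual update. A generic skeleton with toy scalar stand-ins for both subproblem solvers (nothing below is the paper's implementation):

```python
import numpy as np

def admm_alternation(y, recon_step, denoise_step, n_iter=10, rho=1.0):
    """Generic ADMM skeleton: alternate a tomographic reconstruction
    update (LM-DRAMA stands here) with a denoising update (MR-DIP
    stands here), coupled by a scaled dual variable u."""
    x = np.zeros_like(y)
    z = np.zeros_like(y)
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = recon_step(y, z - u, rho)   # data-fidelity subproblem
        z = denoise_step(x + u)         # prior (network) subproblem
        u = u + x - z                   # dual ascent on the constraint x = z
    return z

# Toy stand-ins: a quadratic data term and a shrinking denoiser
recon = lambda y, v, rho: (y + rho * v) / (1.0 + rho)
denoise = lambda v: 0.9 * v
out = admm_alternation(np.full(3, 2.0), recon, denoise)
print(out.round(3))
```

The dual variable `u` is what forces the reconstruction estimate and the denoised estimate to agree at convergence, so raw list-data fidelity and the network prior are both respected.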


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Motion , Computer Simulation , Algorithms , Phantoms, Imaging
9.
Phys Med Biol ; 68(1)2022 12 22.
Article in English | MEDLINE | ID: mdl-36560889

ABSTRACT

Objective. The aim of this study is to evaluate the performance characteristics of a brain positron emission tomography (PET) scanner composed of four-layer independent-readout time-of-flight depth-of-interaction (TOF-DOI) detectors capable of first-interaction-position (FIP) detection, using the Geant4 application for tomographic emission (GATE). The evaluation covers spatial resolution, sensitivity, count rate capability, and reconstructed image quality. Approach. The proposed TOF-DOI PET detector comprises four layers of a 50 × 50 cerium-doped lutetium-yttrium oxyorthosilicate (LYSO:Ce) scintillator array with a 1 mm pitch, coupled to a 16 × 16 multi-pixel photon counter array with 3.0 mm × 3.0 mm photosensitive segments. Moving outward from the center of the field of view (FOV), the scintillator thicknesses of the four layers are 2.5, 3, 4, and 6 mm. The four layers were simulated with a 150 ps coincidence time resolution, and the independent readout enables FIP detection. The spatial resolution and imaging performance were compared among the true-FIP, winner-takes-all (WTA), and front-layer FIP (FL-FIP) methods (FL-FIP selects the interaction position located on the front-most of all the interaction layers). The National Electrical Manufacturers Association NU 2-2018 procedure was referenced and modified to evaluate the performance of the proposed scanner. Main results. In the detector evaluation, the intrinsic spatial resolutions were 0.52 and 0.76 mm full width at half-maximum (FWHM) for 0° and 30° incident γ-rays in the first layer pair, respectively. The spatial resolution reconstructed by filtered backprojection (FBP) achieved sub-millimeter FWHM on average over the whole FOV. The maximum true count rate was 207.6 kcps at 15 kBq ml-1 and the noise-equivalent count rate (NECR_2R) was 54.7 kcps at 6.0 kBq ml-1. Total sensitivity was 45.2 cps kBq-1 and 48.4 cps kBq-1 at the center and 10 cm off-center of the FOV, respectively. The TOF and DOI reconstructions significantly improved the image quality in the phantom studies. Moreover, FL-FIP outperformed the conventional WTA method in terms of spatial resolution and image quality. Significance. The proposed brain PET scanner could achieve sub-millimeter spatial resolution and high image quality with TOF and DOI reconstruction, which is meaningful for clinical oncology research. Meanwhile, the comparison among the three positioning methods indicated that FL-FIP reduces the image degradation caused by Compton scatter more than WTA.
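The NECR_2R figure quoted above follows the standard noise-equivalent count rate definition with the 2R randoms convention, which discounts the true rate by the noise contributed by scattered and random coincidences:

```python
def necr_2r(trues, scatters, randoms):
    """Noise-equivalent count rate, 2R randoms convention:
    NECR_2R = T^2 / (T + S + 2R), all rates in the same units."""
    return trues**2 / (trues + scatters + 2.0 * randoms)

# With no scatter or randoms, NECR equals the true rate
print(necr_2r(100.0, 0.0, 0.0))  # 100.0
# Scatter and randoms pull the noise-equivalent rate down
print(necr_2r(100.0, 50.0, 25.0))  # 50.0
```

This is why the NECR peak (54.7 kcps at 6.0 kBq ml-1) occurs at a lower activity concentration than the peak true rate: randoms grow faster with activity than trues.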


Subjects
Positron-Emission Tomography , Tomography, X-Ray Computed , Positron-Emission Tomography/methods , Silicates , Brain/diagnostic imaging , Phantoms, Imaging , Equipment Design
10.
Radiol Phys Technol ; 15(1): 72-82, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35132574

ABSTRACT

Although deep learning for application in positron emission tomography (PET) image reconstruction has attracted the attention of researchers, the image quality must be further improved. In this study, we propose a novel convolutional neural network (CNN)-based fast time-of-flight PET (TOF-PET) image reconstruction method to fully utilize the direction information of coincidence events. The proposed method inputs view-grouped histo-images into a 3D CNN as a multi-channel image to use the direction information of such events. We evaluated the proposed method using Monte Carlo simulation data obtained from a digital brain phantom. Compared with a case without direction information, the peak signal-to-noise ratio and structural similarity were improved by 1.2 dB and 0.02, respectively, at a coincidence time resolution of 300 ps. The calculation times of the proposed method were significantly lower than those of a conventional iterative reconstruction. These results indicate that the proposed method improves both the speed and image quality of a TOF-PET image reconstruction.
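View-grouped histo-images can be sketched as simple per-view event binning, with the views stacked as CNN input channels so the network sees the direction information of coincidence events. The event tuple format and image size below are illustrative assumptions:

```python
import numpy as np

def view_grouped_histo_images(events, n_views, img_shape):
    """Bin TOF-localized events into one histo-image per view group and
    stack the views as channels for a multi-channel CNN input."""
    channels = np.zeros((n_views,) + img_shape)
    for view, iy, ix in events:
        channels[view % n_views, iy, ix] += 1.0
    return channels

# (view, y, x) triplets standing in for TOF-localized events
events = [(0, 1, 1), (0, 1, 1), (1, 2, 3)]
h = view_grouped_histo_images(events, n_views=2, img_shape=(4, 4))
print(h.shape, int(h.sum()))  # (2, 4, 4) 3
```

Summing the channels would discard the per-view (directional) structure; keeping them separate is what lets the 3D CNN exploit it.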


Subjects
Deep Learning , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Phantoms, Imaging , Positron-Emission Tomography/methods
11.
Phys Med Biol ; 67(4)2022 02 11.
Article in English | MEDLINE | ID: mdl-35100575

ABSTRACT

Objective. Convolutional neural networks (CNNs) are a strong tool for improving the coincidence time resolution (CTR) of time-of-flight (TOF) positron emission tomography detectors. However, many signal waveforms from multiple source positions are required for CNN training. Furthermore, there is concern that TOF estimation is biased near the edge of the training space, despite the reduced estimation variance (i.e., timing uncertainty). Approach. We propose a simple method for unbiased TOF estimation by combining a conventional leading-edge discriminator (LED) with a CNN that can be trained on waveforms collected from a single source position. The proposed method estimates and corrects the time-difference error calculated by the LED rather than the absolute time difference. This model can eliminate the TOF estimation bias, as the combination with the LED converts the distribution of the label data from discrete values at each position into a continuous symmetric distribution. Main results. Evaluation results using signal waveforms collected from scintillation detectors show that the proposed method can correctly estimate all source positions without bias, using training data from a single source position. Moreover, the proposed method improves the CTR of the conventional LED. Significance. We believe that the improved CTR will not only increase the signal-to-noise ratio but will also contribute significantly to the realization of direct positron emission imaging.
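The hybrid estimator corrects the LED's timing error rather than predicting the absolute time difference. A sketch with a toy threshold discriminator and a stand-in for the trained CNN error model (the waveforms and threshold are invented for illustration):

```python
import numpy as np

def led_crossing(waveform, times, threshold):
    """Leading-edge discriminator: time of the first sample at or above
    the threshold."""
    idx = np.argmax(waveform >= threshold)
    return times[idx]

def corrected_tof(wave_a, wave_b, times, threshold, error_model):
    """Hybrid estimator sketch: error_model (a stand-in for the trained
    CNN) predicts the LED's timing error, not the time difference itself."""
    dt_led = led_crossing(wave_a, times, threshold) - led_crossing(wave_b, times, threshold)
    return dt_led - error_model(wave_a, wave_b)

# Toy waveforms: identical pulses shifted by exactly 2 samples
t = np.arange(10, dtype=float)
pulse = np.array([0, 0, 1, 3, 5, 4, 2, 1, 0, 0], dtype=float)
shifted = np.roll(pulse, 2)
no_error = lambda a, b: 0.0  # stand-in for the trained CNN
print(corrected_tof(pulse, shifted, t, threshold=2.0, error_model=no_error))  # -2.0
```

Because the label for training is the LED residual rather than a per-position constant, the label distribution becomes continuous and symmetric, which is the mechanism behind the unbiased estimation claimed above.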


Subjects
Photons , Scintillation Counting , Neural Networks, Computer , Positron-Emission Tomography/methods , Scintillation Counting/methods , Signal-To-Noise Ratio
12.
Med Image Anal ; 74: 102226, 2021 12.
Article in English | MEDLINE | ID: mdl-34563861

ABSTRACT

Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. In preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrated state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with one-tenth of the full counts. These results suggest that the proposed MR-GDD can considerably reduce PET scan times and PET tracer doses without adverse effects on patients.
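An attention gate of the kind described above rescales the guidance features by data-dependent coefficients in (0, 1), so MR-specific structure cannot pass through unmodulated. This additive-attention sketch uses scalar weights in place of learned convolutions, and is not the MR-GDD architecture itself:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(pet_feat, mr_feat, w_pet=1.0, w_mr=1.0, w_psi=1.0):
    """Additive attention gate sketch: guidance (MR) features are rescaled
    by coefficients alpha in (0, 1) computed from both modalities, so
    MR-only structure cannot leak into the PET branch unmodulated."""
    alpha = sigmoid(w_psi * np.maximum(w_pet * pet_feat + w_mr * mr_feat, 0.0))
    return alpha * mr_feat

rng = np.random.default_rng(1)
pet = rng.standard_normal((4, 4))
mr = rng.standard_normal((4, 4))
gated = attention_gate(pet, mr)
print(gated.shape)  # (4, 4)
```

Since `alpha` never exceeds 1, the gated guidance can only attenuate (never amplify) MR features, which is one way to keep guidance-specific patterns out of the denoised PET output.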


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Fluorodeoxyglucose F18 , Humans , Neural Networks, Computer , Signal-To-Noise Ratio
13.
Transl Vis Sci Technol ; 10(2): 18, 2021 02 05.
Article in English | MEDLINE | ID: mdl-34003903

ABSTRACT

Purpose: Measurements of macular pigment optical density (MPOD) by the autofluorescence technique yield underestimations of actual values in eyes with cataract. We applied deep learning (DL) to correct this error. Subjects and Methods: MPOD was measured by SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany) in 197 eyes before and after cataract surgery. The nominal MPOD values (= preoperative values) were corrected by three methods: the regression equation (RE) method, the subjective classification (SC) method (described in our previous study), and the DL method. The errors between the corrected and true values (= postoperative values) were calculated for local MPODs at 0.25°, 0.5°, 1°, and 2° eccentricities and for macular pigment optical volume (MPOV) within 9° eccentricity. Results: The mean error for MPODs at the four eccentricities was 32% without any correction, 15% with correction by RE, 16% with correction by SC, and 14% with correction by DL. The mean error for MPOV was 21% without correction and 14%, 10%, and 10%, respectively, with correction by the same methods. The errors with any correction were significantly lower than those without correction (P < 0.001, linear mixed model with Tukey's test). The errors with DL correction were significantly lower than those with RE correction for MPOD at 1° eccentricity and for MPOV (P < 0.001), and were equivalent to those with SC correction. Conclusions: The objective method using DL was useful for correcting MPOD values measured in elderly people. Translational Relevance: MPOD can be obtained with small errors in eyes with cataract using DL.


Subjects
Cataract , Deep Learning , Macular Pigment , Aged , Germany , Humans , Lutein , Zeaxanthins
14.
Ann Nucl Med ; 35(6): 691-701, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33811600

ABSTRACT

OBJECTIVES: Attenuation correction (AC) is crucial for ensuring the quantitative accuracy of positron emission tomography (PET) imaging. However, obtaining accurate µ-maps from brain-dedicated PET scanners without an AC acquisition mechanism is challenging. To overcome this problem, we developed a deep learning-based PET AC (deep AC) framework to synthesize transmission computed tomography (TCT) images from non-AC (NAC) PET images using a convolutional neural network (CNN) trained on a large dataset of various radiotracers for brain PET imaging. METHODS: The proposed framework comprises three steps: (1) NAC PET image generation, (2) synthetic TCT generation using the CNN, and (3) PET image reconstruction. To avoid overfitting, we trained the CNN on a mixed image dataset of six radiotracers: [18F]FDG, [18F]BCPP-EF, [11C]raclopride, [11C]PIB, [11C]DPA-713, and [11C]PBB3. We used 1261 brain NAC PET and TCT images (1091 for training and 70 for testing). [11C]Methionine subjects were included only in the testing dataset, not in the training dataset. RESULTS: The image quality of the synthetic TCT images obtained using the CNN trained on the mixed dataset of six radiotracers was superior to that obtained using CNNs trained on split datasets generated from each radiotracer. In the [18F]FDG study, the mean relative PET biases of the emission-segmented AC (ESAC) and deep AC were 8.46 ± 5.24 and -5.69 ± 4.97, respectively. The deep AC PET and TCT AC PET images exhibited excellent correlation for all seven radiotracers (R2 = 0.912-0.982). CONCLUSION: These results indicate that our proposed deep AC framework provides quantitatively superior PET images when the CNN is trained on a mixed dataset of PET tracers rather than on split datasets specific to each tracer.
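The three-step framework can be sketched as a simple pipeline; all component names and the toy scalar stand-ins below are illustrative, not the paper's code:

```python
def deep_ac_pipeline(raw_data, reconstruct, synthesize_tct, mu_map_from_tct):
    """Three-step deep-AC sketch. The callables are stand-ins for the
    framework's components: a PET reconstructor, the trained CNN, and a
    TCT-to-mu-map conversion."""
    nac_pet = reconstruct(raw_data, mu_map=None)                # (1) NAC PET image
    tct = synthesize_tct(nac_pet)                               # (2) CNN: NAC PET -> synthetic TCT
    return reconstruct(raw_data, mu_map=mu_map_from_tct(tct))   # (3) AC reconstruction

# Toy scalar stand-ins, just to show the data flow
recon = lambda d, mu_map: d if mu_map is None else d * mu_map
cnn = lambda nac: nac + 1.0
to_mu = lambda tct: 2.0
print(deep_ac_pipeline(3.0, recon, cnn, to_mu))  # 6.0
```

The same raw data are reconstructed twice: once without attenuation correction to feed the CNN, and once with the µ-map derived from the synthetic TCT.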


Subjects
Deep Learning , Image Processing, Computer-Assisted , Positron-Emission Tomography , Fluorodeoxyglucose F18 , Multimodal Imaging
15.
Phys Med Biol ; 66(1): 015006, 2021 01 14.
Article in English | MEDLINE | ID: mdl-33227725

ABSTRACT

Although convolutional neural networks (CNNs) demonstrate the superior performance in denoising positron emission tomography (PET) images, a supervised training of the CNN requires a pair of large, high-quality PET image datasets. As an unsupervised learning method, a deep image prior (DIP) has recently been proposed; it can perform denoising with only the target image. In this study, we propose an innovative procedure for the DIP approach with a four-dimensional (4D) branch CNN architecture in end-to-end training to denoise dynamic PET images. Our proposed 4D CNN architecture can be applied to end-to-end dynamic PET image denoising by introducing a feature extractor and a reconstruction branch for each time frame of the dynamic PET image. In the proposed DIP method, it is not necessary to prepare high-quality and large patient-related PET images. Instead, a subject's own static PET image is used as additional information, dynamic PET images are treated as training labels, and denoised dynamic PET images are obtained from the CNN outputs. Both simulation with [18F]fluoro-2-deoxy-D-glucose (FDG) and preclinical data with [18F]FDG and [11C]raclopride were used to evaluate the proposed framework. The results showed that our 4D DIP framework quantitatively and qualitatively outperformed 3D DIP and other unsupervised denoising methods. The proposed 4D DIP framework thus provides a promising procedure for dynamic PET image denoising.
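The 4D branch architecture pairs one shared feature extractor (fed the subject's static PET image) with a reconstruction branch per time frame; the dynamic frames serve as training labels. A schematic sketch with stand-in callables for the CNN components:

```python
import numpy as np

def branch_4d_dip(static_input, extract, reconstruct_branches):
    """Shared feature extractor plus one reconstruction branch per time
    frame; in the DIP setting the noisy dynamic frames are the training
    labels and the branch outputs are the denoised frames. All callables
    are stand-ins for the network components."""
    features = extract(static_input)
    return [branch(features) for branch in reconstruct_branches]

static = np.ones((2, 2))      # stand-in for the subject's static PET image
extract = lambda x: 2.0 * x   # shared feature extractor stand-in
branches = [lambda f, k=k: k * f for k in (1.0, 2.0, 3.0)]
frames = branch_4d_dip(static, extract, branches)
print(len(frames))  # 3
```

Sharing the extractor across frames is what lets information from the high-count static image support every low-count dynamic frame in end-to-end training.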


Subjects
Brain/diagnostic imaging , Fluorodeoxyglucose F18/metabolism , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Positron-Emission Tomography/methods , Animals , Brain/metabolism , Haplorhini , Humans , Radiopharmaceuticals/metabolism
16.
Phys Med Biol ; 62(17): 7148-7166, 2017 Aug 18.
Article in English | MEDLINE | ID: mdl-28753133

ABSTRACT

A high-resolution positron emission tomography (PET) scanner dedicated to brain studies was developed and its performance was evaluated. A four-layer depth-of-interaction detector was designed containing five detector units axially lined up per layer board. Each detector unit consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal-processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655 360. The transaxial and axial fields of view (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed to create coincidence event data in on-the-fly software on the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used a 1.2 mm × 1.2 mm crystal segment size, and the high-speed mode (HS-mode) used a 4.8 mm × 4.8 mm size by grouping 16 crystal segments of 1.2 mm each to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of detector module pairs, which faced each other at various angles, were measured by scanning a 0.25 mm diameter 22Na point source. The intrinsic resolutions obtained were 1.08 mm full width at half-maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA). The system sensitivity was 21.4 cps kBq-1, as measured using an 18F line source aligned with the center of the transaxial FOV. Count rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), resulting in a peak true count rate of 249 kcps and a peak noise-equivalent count rate (NECR_2R) of 27.9 kcps at 11.9 kBq ml-1. Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for single and coincidence count rates, respectively. In phantom studies, we also demonstrated the scanner's imaging capabilities by means of a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom; the images obtained were of acceptable quality for high-resolution imaging. In clinical and preclinical studies, we imaged the brains of a human and of small animals.


Subjects
Brain/diagnostic imaging , Image Processing, Computer-Assisted/instrumentation , Phantoms, Imaging , Photons , Positron-Emission Tomography/instrumentation , Positron-Emission Tomography/methods , Animals , Equipment Design , Humans , Image Processing, Computer-Assisted/methods , Mice , Rats , Rats, Sprague-Dawley