Results 1 - 20 of 121
1.
J Nucl Med ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38871391

ABSTRACT

The collaboration of Yale, the University of California, Davis, and United Imaging Healthcare has successfully developed the NeuroEXPLORER, a dedicated human brain PET imager with high spatial resolution, high sensitivity, and a built-in 3-dimensional camera for markerless continuous motion tracking. It has high depth-of-interaction and time-of-flight resolutions, along with a 52.4-cm transverse field of view (FOV) and an extended axial FOV (49.5 cm) to enhance sensitivity. Here, we present the physical characterization, performance evaluation, and first human images of the NeuroEXPLORER. Methods: Measurements of spatial resolution, sensitivity, count rate performance, energy and timing resolution, and image quality were performed adhering to the National Electrical Manufacturers Association (NEMA) NU 2-2018 standard. The system's performance was demonstrated through imaging studies of the Hoffman 3-dimensional brain phantom and the mini-Derenzo phantom. Initial 18F-FDG images from a healthy volunteer are presented. Results: With filtered backprojection reconstruction, the radial and tangential spatial resolutions (full width at half maximum) averaged 1.64, 2.06, and 2.51 mm, with axial resolutions of 2.73, 2.89, and 2.93 mm for radial offsets of 1, 10, and 20 cm, respectively. The average time-of-flight resolution was 236 ps, and the energy resolution was 10.5%. NEMA sensitivities were 46.0 and 47.6 kcps/MBq at the center and 10-cm offset, respectively. A sensitivity of 11.8% was achieved at the FOV center. The peak noise-equivalent count rate was 1.31 Mcps at 58.0 kBq/mL, and the scatter fraction at 5.3 kBq/mL was 36.5%. The maximum count rate error at the peak noise-equivalent count rate was less than 5%. At 3 iterations, the NEMA image-quality contrast recovery coefficients varied from 74.5% (10-mm sphere) to 92.6% (37-mm sphere), and background variability ranged from 3.1% to 1.4% at a contrast of 4.0:1. An example human brain 18F-FDG image exhibited very high resolution, capturing intricate details in the cortex and subcortical structures. Conclusion: The NeuroEXPLORER offers high sensitivity and high spatial resolution. With its long axial length, it also enables high-quality spinal cord imaging and image-derived input functions from the carotid arteries. These performance enhancements will substantially broaden the range of human brain PET paradigms and protocols, and thereby expand clinical research applications.

2.
Commun Med (Lond) ; 4(1): 117, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872007

ABSTRACT

BACKGROUND: Mobile upright PET devices have the potential to enable previously impossible neuroimaging studies. Currently available options are imagers with deep brain coverage that severely limit head/body movements or imagers with upright/motion-enabling properties that are limited to covering only the brain surface. METHODS: In this study, we test the feasibility of an upright, motion-compatible brain imager, our Ambulatory Motion-enabling Positron Emission Tomography (AMPET) helmet prototype, for use as a neuroscience tool by replicating a variant of a published PET/fMRI study of the neural correlates of human walking. We validate our AMPET prototype by conducting a walking movement paradigm to determine motion tolerance and assess for appropriate task-related activity in motor-related brain regions. Human participants (n = 11 patients) performed a walking-in-place task with simultaneous AMPET imaging, receiving a bolus delivery of F18-Fluorodeoxyglucose. RESULTS: Here we validate three pre-determined measurement criteria, including brain alignment motion artifact of less than 2 mm and functional neuroimaging outcomes consistent with the existing walking movement literature. CONCLUSIONS: The study extends the potential and utility of mobile, upright, and motion-tolerant neuroimaging devices in real-world, ecologically valid paradigms. Our approach accounts for the real-world logistics of an actual human participant study and can be used to inform experimental physicists, engineers, and imaging instrumentation developers undertaking similar future studies. The technical advances described herein help set new priorities for facilitating future neuroimaging devices and research of the human brain in health and disease.


Brain imaging plays an important role in understanding how the human brain functions in both health and disease. However, traditional brain scanners often require people to remain still, limiting the study of the brain in motion, and excluding people who cannot remain still. To overcome this, our team developed an imager that moves with a person's head, which uses a suspended ring of lightweight detectors that fit to the head. Using our imager, we were able to obtain clear brain images of people walking in place that showed the expected brain activity patterns during walking. Further development of our imager could enable it to be used to better understand real-world brain function and behavior, enabling enhanced knowledge and treatment of neurological conditions.

3.
Neuroimage ; 293: 120611, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38643890

ABSTRACT

Dynamic PET allows quantification of physiological parameters through tracer kinetic modeling. For dynamic imaging of brain or head and neck cancer on conventional PET scanners with a short axial field of view, the image-derived input function (ID-IF) from intracranial blood vessels such as the carotid artery (CA) suffers from severe partial volume effects. Alternatively, the optimization-derived input function (OD-IF) obtained with the simultaneous estimation (SIME) method does not rely on an ID-IF but derives the input function directly from the data. However, the optimization problem is often highly ill-posed. We proposed a new method that combines the ideas of OD-IF and ID-IF through a kernel framework. While evaluation of such a method is challenging in human subjects, we used the uEXPLORER total-body PET system, which covers the major blood pools, to provide a reference for validation. METHODS: The conventional SIME approach estimates an input function jointly with the kinetic parameters by fitting time-activity curves from multiple regions of interest (ROIs). The input function is commonly parameterized with a highly nonlinear model that is difficult to estimate. The proposed kernel SIME method exploits the CA ID-IF as a priori information via a kernel representation to stabilize the SIME approach. The unknown parameters are linear and thus easier to estimate. The proposed method was evaluated using 18F-fluorodeoxyglucose studies with both computer simulations and 20 human-subject scans acquired on the uEXPLORER scanner. The effect of the number of ROIs on kernel SIME was also explored. RESULTS: The OD-IF estimated by kernel SIME showed a good match with the reference input function and provided more accurate estimation of kinetic parameters for both simulation and human-subject data. Kernel SIME led to the highest correlation coefficient (R = 0.97) and the lowest mean absolute error (MAE = 10.5%) compared to using the CA ID-IF (R = 0.86, MAE = 108.2%) and conventional SIME (R = 0.57, MAE = 78.7%) in the human-subject evaluation. Adding more ROIs improved the overall performance of the kernel SIME method. CONCLUSION: The proposed kernel SIME method shows promise for providing accurate estimation of the blood input function and kinetic parameters for brain PET parametric imaging.
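
The key idea above is that the input function is written as a linear combination of kernel basis functions built from the carotid-artery ID-IF, so the unknowns become linear coefficients. The sketch below illustrates that representation only; the kernel construction (Gaussian kernel with kNN truncation) and the least-squares fit to a surrogate target are assumptions for illustration, whereas the actual kernel SIME optimizes the coefficients jointly with kinetic parameters against multi-ROI time-activity curves.

```python
# Minimal sketch (not the authors' code): kernel representation of the blood
# input function, Cp = K @ alpha, with the kernel matrix K built from the
# carotid-artery image-derived input function (CA ID-IF) as prior information.
# In the paper, alpha is estimated jointly with kinetic parameters (SIME);
# here alpha is fit to a surrogate target only to illustrate the linearity.
import numpy as np

def gaussian_kernel_matrix(features, sigma=1.0, n_neighbors=10):
    """Gaussian kernel over temporal prior features with simple kNN truncation."""
    f = np.atleast_2d(features).T                      # (T, d) feature vectors
    d2 = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    for i in range(K.shape[0]):                        # keep only strongest neighbors
        idx = np.argsort(K[i])[:-n_neighbors]
        K[i, idx] = 0.0
    return K / K.sum(axis=1, keepdims=True)            # row-normalize

# Illustrative data: a noisy CA ID-IF sampled on a 1-min grid (units assumed kBq/mL)
t = np.arange(1, 61, dtype=float)
true_if = 50 * t * np.exp(-t / 4.0) + 5 * np.exp(-t / 40.0)
ca_idif = true_if * 0.6 + np.random.normal(0, 1.0, t.size)   # PVE-biased + noisy

K = gaussian_kernel_matrix(ca_idif, sigma=np.std(ca_idif))
# Linear coefficients: in kernel SIME these would be optimized against multi-ROI
# time-activity curves; a least-squares fit to the ID-IF is only a placeholder.
alpha, *_ = np.linalg.lstsq(K, ca_idif, rcond=None)
cp_estimate = K @ alpha
```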


Subjects
Brain, Positron-Emission Tomography, Humans, Positron-Emission Tomography/methods, Positron-Emission Tomography/standards, Brain/diagnostic imaging, Whole Body Imaging/methods, Image Processing, Computer-Assisted/methods, Algorithms
4.
Article in English | MEDLINE | ID: mdl-38500666

ABSTRACT

Dual-energy computed tomography (DECT) enables material decomposition for tissues and produces additional information for PET/CT imaging to potentially improve the characterization of diseases. PET-enabled DECT (PDECT) allows the generation of PET and DECT images simultaneously with a conventional PET/CT scanner without the need for a second x-ray CT scan. In PDECT, high-energy γ-ray CT (GCT) images at 511 keV are obtained from time-of-flight (TOF) PET data and are combined with the existing x-ray CT images to form DECT imaging. We have developed a kernel-based maximum-likelihood attenuation and activity (MLAA) method that uses x-ray CT images as a priori information for noise suppression. However, our previous studies focused on GCT image reconstruction at the PET image resolution which is coarser than the image resolution of the x-ray CT. In this work, we explored the feasibility of generating super-resolution GCT images at the corresponding CT resolution. The study was conducted using both phantom and patient scans acquired with the uEXPLORER total-body PET/CT system. GCT images at the PET resolution with a pixel size of 4.0 mm × 4.0 mm and at the CT resolution with a pixel size of 1.2 mm × 1.2 mm were reconstructed using both the standard MLAA and kernel MLAA methods. The results indicated that the GCT images at the CT resolution had sharper edges and revealed more structural details compared to the images reconstructed at the PET resolution. Furthermore, images from the kernel MLAA method showed substantially improved image quality compared to those obtained with the standard MLAA method.

5.
ArXiv ; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38351944

ABSTRACT

X-ray computed tomography (CT) in PET/CT is commonly operated at a single energy, which limits its ability to provide tissue composition information. Dual-energy (DE) spectral CT enables material decomposition by using two different x-ray energies and may be combined with PET for improved multimodality imaging, but this would either require a hardware upgrade or increase the radiation dose due to the added second x-ray CT scan. The recently proposed PET-enabled DECT method allows dual-energy spectral imaging using a conventional PET/CT scanner without the need for a second x-ray CT scan. A gamma-ray CT (gCT) image at 511 keV can be generated from the existing time-of-flight PET data with the maximum-likelihood attenuation and activity (MLAA) approach and is then combined with the low-energy x-ray CT image to form dual-energy spectral imaging. To improve the image quality of gCT, a kernel MLAA method was further proposed by incorporating the x-ray CT image as a priori information. The concept of PET-enabled DECT has been validated using simulation studies, but not yet with real 3D data. In this work, we developed a general open-source implementation for gCT reconstruction from PET data and used this implementation for the first real-data validation, with both a physical phantom study and a human-subject study on a uEXPLORER total-body PET/CT system. The results demonstrate the feasibility of this method for spectral imaging and material decomposition.
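
For readers unfamiliar with the decomposition step, the sketch below shows the basic per-voxel two-material (water/bone) decomposition that dual-energy imaging enables once a 511 keV gCT image and a low-energy x-ray CT image are available. The basis attenuation coefficients and voxel values are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (assumptions noted): per-voxel two-material decomposition from
# the 511 keV gamma-ray CT image and a low-energy x-ray CT image. The basis
# attenuation values below are illustrative placeholders, not calibrated data.
import numpy as np

# Basis linear attenuation coefficients [1/cm] at (~80 keV x-ray, 511 keV).
# Illustrative values only; a real implementation would use tabulated/measured ones.
MU_WATER = np.array([0.184, 0.096])
MU_BONE  = np.array([0.428, 0.169])

def decompose(mu_xray, mu_gct):
    """Solve [mu_xray, mu_gct]^T = A [f_water, f_bone]^T for each voxel."""
    A = np.stack([MU_WATER, MU_BONE], axis=1)           # 2 x 2 basis matrix
    meas = np.stack([mu_xray.ravel(), mu_gct.ravel()])  # 2 x Nvox measurements
    frac = np.linalg.solve(A, meas)                     # 2 x Nvox basis fractions
    return frac[0].reshape(mu_xray.shape), frac[1].reshape(mu_xray.shape)

# Toy voxel grid: soft-tissue background with a bone-like insert
mu80 = np.full((64, 64), 0.19);  mu80[20:30, 20:30] = 0.40
mu511 = np.full((64, 64), 0.097); mu511[20:30, 20:30] = 0.16
f_water, f_bone = decompose(mu80, mu511)
```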

6.
IEEE Trans Med Imaging ; 43(6): 2148-2158, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38261489

ABSTRACT

Positron emission tomography (PET) is a widely utilized medical imaging modality that uses positron-emitting radiotracers to visualize biochemical processes in a living body. The spatiotemporal distribution of a radiotracer is estimated by detecting the coincidence photon pairs generated through positron annihilations. In human tissue, about 40% of the positrons form positronium prior to annihilation. The positronium lifetime is influenced by the tissue microenvironment and could provide valuable information for a better understanding of disease progression and treatment response. Currently, there are few methods available for reconstructing high-resolution lifetime images in practical applications. This paper presents an efficient statistical image reconstruction method for positronium lifetime imaging (PLI). We also analyze the random triple-coincidence events in PLI and propose a correction method for random events, which is essential for real applications. Both simulation and experimental studies demonstrate that the proposed method can produce lifetime images with high numerical accuracy, low variance, and resolution comparable to that of the activity images generated by a PET scanner with currently available time-of-flight resolution.
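
As a rough illustration of the quantity being reconstructed, the sketch below estimates an ortho-positronium lifetime from prompt-gamma-to-annihilation time differences by fitting an exponential decay plus a flat term standing in for random triple coincidences. It ignores detector time resolution and the image-reconstruction aspect of the paper; the lifetime value, count levels, and fitting window are assumptions.

```python
# Minimal sketch (illustrative, not the paper's reconstruction): estimating a
# positronium lifetime from prompt-gamma/annihilation time differences by
# fitting an exponential decay plus a flat term for random triple coincidences.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau_true_ns = 2.0                                  # assumed o-Ps lifetime in tissue
true_dt = rng.exponential(tau_true_ns, 50_000)     # true triple-coincidence delays
random_dt = rng.uniform(0, 20, 5_000)              # random triples: flat in time
counts, edges = np.histogram(np.concatenate([true_dt, random_dt]),
                             bins=200, range=(0, 20))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(dt, amp, tau, bkg):
    return amp * np.exp(-dt / tau) + bkg           # decay + uniform randoms

p0 = [counts.max(), 1.5, counts[-20:].mean()]      # rough initial guess
(amp, tau_est, bkg), _ = curve_fit(model, centers, counts, p0=p0)
print(f"estimated lifetime: {tau_est:.2f} ns (truth {tau_true_ns} ns)")
```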


Subjects
Algorithms, Image Processing, Computer-Assisted, Phantoms, Imaging, Positron-Emission Tomography, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Humans, Computer Simulation
7.
Med Phys ; 50(10): 6047-6059, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37538038

ABSTRACT

BACKGROUND: Physiological motion, such as respiratory motion, has become a limiting factor in the spatial resolution of positron emission tomography (PET) imaging as the resolution of PET detectors continues to improve. Motion-induced misregistration between PET and CT images can also cause attenuation correction artifacts. Respiratory gating can be used to freeze the motion and reduce motion-induced artifacts. PURPOSE: In this study, we propose a robust data-driven approach using an unsupervised deep clustering network that employs an autoencoder (AE) to extract latent features for respiratory gating. METHODS: We first divide list-mode PET data into short-time frames. The short-time frame images are reconstructed without attenuation, scatter, or randoms correction to avoid attenuation mismatch artifacts and to reduce image reconstruction time. The deep AE is then trained using the reconstructed short-time frame images to extract latent features for respiratory gating. No additional data are required for the AE training. K-means clustering is subsequently used to perform respiratory gating based on the latent features extracted by the deep AE. The effectiveness of the proposed Deep Clustering method was evaluated using physical phantom and real patient datasets. The performance was compared against phase gating based on an external signal (External) and image-based principal component analysis (PCA) with K-means clustering (Image PCA). RESULTS: The proposed method produced gated images with higher contrast and sharper myocardium boundaries than those obtained using the External gating method and Image PCA. Quantitatively, the gated images generated by the proposed Deep Clustering method showed larger center-of-mass (COM) displacement and higher lesion contrast than those obtained using the other two methods. CONCLUSIONS: The effectiveness of the proposed method was validated using physical phantom and real patient data. The results showed that the proposed framework provides superior gating compared with the conventional External method and Image PCA.
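
A minimal sketch of the gating pipeline described above is given below: an autoencoder is trained on (placeholder) short-time-frame images with a reconstruction loss, and K-means then clusters the latent codes into respiratory gates. The layer sizes, latent dimension, number of gates, and training settings are assumptions, not the paper's configuration.

```python
# Minimal sketch (assumed architecture/hyperparameters): train an autoencoder on
# reconstructed short-time-frame images, then K-means cluster the latent codes
# to assign frames to respiratory gates.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

frames = torch.rand(500, 1, 32, 32)                  # placeholder short-time frames
x = frames.flatten(1)                                # (n_frames, n_voxels)

latent_dim = 8
encoder = nn.Sequential(nn.Linear(x.shape[1], 128), nn.ReLU(),
                        nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, x.shape[1]))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                             # unsupervised reconstruction loss
    opt.zero_grad()
    loss = loss_fn(decoder(encoder(x)), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    latent = encoder(x).numpy()                      # latent features per frame
gate_labels = KMeans(n_clusters=6, n_init=10).fit_predict(latent)  # e.g. 6 gates
```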

8.
IEEE Trans Biomed Eng ; 70(10): 2863-2873, 2023 10.
Article in English | MEDLINE | ID: mdl-37043314

ABSTRACT

Intraoperative identification of head and neck cancer tissue is essential to achieve complete tumor resection and mitigate tumor recurrence. Mesoscopic fluorescence lifetime imaging (FLIm) of intrinsic tissue fluorophore emission has demonstrated the potential to demarcate the extent of the tumor in patients undergoing surgical procedures of the oral cavity and the oropharynx. Here, we report FLIm-based classification methods using standard machine learning models that account for the diverse anatomical and biochemical composition across the head and neck anatomy to improve tumor region identification. Three anatomy-specific binary classification models were developed (i.e., "base of tongue," "palatine tonsil," and "oral tongue"). FLIm data from patients (N = 85) undergoing upper aerodigestive oncologic surgery were used to train and validate the classification models using a leave-one-patient-out cross-validation method. These models were evaluated for two classification tasks: (1) to discriminate between healthy and cancer tissue, and (2) to apply the binary classification model trained on healthy and cancer tissue to discriminate dysplasia through transfer learning. This approach achieved superior classification performance compared to anatomy-agnostic models; specifically, a ROC-AUC of 0.94 was achieved for the first task and 0.92 for the second. Furthermore, the model demonstrated detection of dysplasia, highlighting the generalization of the FLIm-based classifier. The current findings demonstrate that a classifier accounting for tumor location can improve the ability to accurately identify surgical margins and underscore FLIm's potential as a tool for surgical guidance in head and neck cancer patients, including those undergoing robotic surgery.
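
The evaluation design (leave-one-patient-out cross-validation grouped by patient) can be sketched as below. The feature layout, the random-forest classifier, and the synthetic data are placeholders; the study's actual models and FLIm features may differ.

```python
# Minimal sketch (assumed feature layout and classifier): healthy-vs-cancer
# classification of FLIm point measurements evaluated with leave-one-patient-out
# cross-validation, mirroring the grouping used in the described study design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

# Placeholder data: rows are FLIm measurements, columns are lifetime/spectral features.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 12))
y = rng.integers(0, 2, size=2000)            # 0 = healthy, 1 = cancer (labels assumed)
patient_id = rng.integers(0, 85, size=2000)  # grouping variable for LOPO CV

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patient_id):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    if len(np.unique(y[test_idx])) == 2:     # AUC needs both classes in the held-out patient
        aucs.append(roc_auc_score(y[test_idx], prob))
print(f"mean leave-one-patient-out ROC-AUC: {np.mean(aucs):.2f}")
```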


Subjects
Head and Neck Neoplasms, Robotic Surgical Procedures, Humans, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/surgery, Optical Imaging/methods, Neck, Tongue
9.
J Digit Imaging ; 36(3): 1049-1059, 2023 06.
Article in English | MEDLINE | ID: mdl-36854923

ABSTRACT

Deep learning (DL) has been proposed to automate image segmentation and provide accuracy, consistency, and efficiency. Accurate segmentation of lipomatous tumors (LTs) is critical for correct tumor radiomics analysis and localization. The major challenge of this task is data heterogeneity, including tumor morphological characteristics and multicenter scanning protocols. To mitigate the issue, we aimed to develop a DL-based Super Learner (SL) ensemble framework with different data correction and normalization methods. Pathologically proven LTs on pre-operative T1-weighted/proton-density MR images of 185 patients were manually segmented. The LTs were categorized by tumor location as distal upper limb (DUL), distal lower limb (DLL), proximal upper limb (PUL), proximal lower limb (PLL), or trunk (T) and split 80%/9%/11% for training, validation, and testing. Six configurations of correction/normalization were applied to the data for fivefold cross-validation training, resulting in 30 base learners (BLs). An SL was obtained from the BLs by optimizing the SL weights. Performance was evaluated by Dice similarity coefficient (DSC), sensitivity, specificity, and Hausdorff distance (HD95). For the BL predictions, the average DSC, sensitivity, and specificity on the testing data were 0.72 ± 0.16, 0.73 ± 0.168, and 0.99 ± 0.012, respectively, while for the SL predictions they were 0.80 ± 0.184, 0.78 ± 0.193, and 1.00 ± 0.010. The average HD95 values of the BLs were 11.5 (DUL), 23.2 (DLL), 25.9 (PUL), 32.1 (PLL), and 47.9 (T) mm, whereas those of the SL were 1.7, 8.4, 15.9, 2.2, and 36.6 mm, respectively. The proposed method could improve segmentation accuracy and mitigate the performance instability caused by data heterogeneity, aiding the differential diagnosis of LTs in real clinical situations.
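
Conceptually, the Super Learner combines the 30 base-learner probability maps with a single weight vector chosen on validation cases. The sketch below shows one way to obtain such weights by maximizing a soft-Dice objective under a simplex constraint; the objective, optimizer, and data shapes are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch (assumed objective): obtain Super Learner weights by optimizing
# a soft-Dice objective over base-learner probability maps on validation cases,
# with weights constrained to be nonnegative and sum to one.
import numpy as np
from scipy.optimize import minimize

def soft_dice(pred, target, eps=1e-6):
    inter = np.sum(pred * target)
    return (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Placeholder validation data: 30 base learners, 5 cases, flattened masks.
rng = np.random.default_rng(0)
n_bl, n_vox = 30, 64 * 64 * 16
bl_probs = rng.random((n_bl, 5, n_vox))            # base-learner probabilities
gt = (rng.random((5, n_vox)) > 0.7).astype(float)  # ground-truth segmentations

def negative_mean_dice(w):
    blended = np.tensordot(w, bl_probs, axes=1)    # weighted probability maps
    return -np.mean([soft_dice(blended[i], gt[i]) for i in range(gt.shape[0])])

w0 = np.full(n_bl, 1.0 / n_bl)
res = minimize(negative_mean_dice, w0, method="SLSQP",
               bounds=[(0, 1)] * n_bl,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
sl_weights = res.x                                 # apply to test-case predictions
```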


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Artificial Intelligence
10.
Med Phys ; 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36651630

ABSTRACT

BACKGROUND: Positron emission tomography (PET) has had a transformative impact on oncological and neurological applications. However, much of PET's potential remains untapped, with limitations primarily driven by low spatial resolution, which severely hampers accurate quantitative PET imaging via the partial volume effect (PVE). PURPOSE: We present experimental results of a practical and cost-effective ultra-high-resolution brain-dedicated PET scanner, using our depth-encoding Prism-PET detectors arranged along a compact and conformal gantry, showing substantial reduction of the PVE and accurate radiotracer uptake quantification in small regions. METHODS: The decagon-shaped prototype scanner has a long diameter of 38.5 cm, a short diameter of 29.1 cm, and an axial field of view (FOV) of 25.5 mm with a single ring of 40 Prism-PET detector modules. Each module comprises a 16 × 16 array of 1.5 × 1.5 × 20-mm3 lutetium yttrium oxyorthosilicate (LYSO) scintillator crystals coupled 4-to-1 to an 8 × 8 array of silicon photomultiplier (SiPM) pixels on one end and to a prismatoid light guide array on the opposite end. The scanner's performance was evaluated by measuring depth-of-interaction (DOI) resolution, energy resolution, timing resolution, spatial resolution, sensitivity, and image quality of ultra-micro Derenzo and three-dimensional (3D) Hoffman brain phantoms. RESULTS: The full width at half maximum (FWHM) DOI, energy, and timing resolutions of the scanner are 2.85 mm, 12.6%, and 271 ps, respectively. Not considering artifacts due to mechanical misalignment of detector blocks, the intrinsic spatial resolution is 0.89-mm FWHM. Point-source images reconstructed with 3D filtered back-projection (FBP) show an average spatial resolution of 1.53-mm FWHM across the entire FOV. The peak absolute sensitivity is 1.2% for an energy window of 400-650 keV. The ultra-micro Derenzo phantom study demonstrates the highest reported spatial resolution performance for a human brain PET scanner, with perfect reconstruction of 1.00-mm-diameter hot rods. Reconstructed images of customized Hoffman brain phantoms demonstrate that Prism-PET enables accurate radiotracer uptake quantification in small brain regions (2-3 mm). CONCLUSIONS: Prism-PET will substantially strengthen the utility of quantitative PET in neurology for early diagnosis of neurodegenerative diseases, and in neuro-oncology for improved management of both primary and metastatic brain tumors.

11.
IEEE Trans Med Imaging ; 42(3): 785-796, 2023 03.
Article in English | MEDLINE | ID: mdl-36288234

ABSTRACT

Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information into the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improve the kernel method is to add an explicit regularization, which, however, leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
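
For reference, the KEM step that the neural KEM alternates with the network-fitting step is the standard kernelized MLEM update of the coefficient image. The toy 1-D sketch below shows that update only; the system matrix, kernel construction, and data are placeholders, and the deep-learning step is omitted.

```python
# Minimal sketch (toy 1-D problem): the kernelized EM (KEM) image-update step.
# The forward projector, kernel matrix, and data below are placeholders; the
# deep-learning step that refits the coefficient image with a CNN is omitted.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 96
A = rng.random((n_bins, n_pix)) * 0.1          # toy forward projector
x_true = np.zeros(n_pix); x_true[20:40] = 4.0
y = rng.poisson(A @ x_true)                    # noisy projection data

# Kernel matrix from a prior image (here a smoothed copy of the truth as a stand-in)
prior = np.convolve(x_true, np.ones(5) / 5, mode="same")
K = np.exp(-(prior[:, None] - prior[None, :]) ** 2 / (2 * 0.5 ** 2))
K /= K.sum(axis=1, keepdims=True)

alpha = np.ones(n_pix)                         # kernel coefficient image
sens = K.T @ (A.T @ np.ones(n_bins))           # sensitivity term K^T A^T 1
for it in range(50):                           # KEM step: MLEM update on alpha
    ybar = A @ (K @ alpha) + 1e-9
    alpha *= (K.T @ (A.T @ (y / ybar))) / sens
x_kem = K @ alpha                              # reconstructed activity image
```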


Subjects
Image Processing, Computer-Assisted, Positron-Emission Tomography, Humans, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Computer Simulation, Neural Networks, Computer, Algorithms
12.
Article in English | MEDLINE | ID: mdl-36172601

ABSTRACT

The current generation of total-body positron emission tomography (PET) scanners offers a significant sensitivity increase with an extended axial imaging extent. With the large volume of lutetium-based scintillation crystals used as detector elements in these scanners, there is an increased flux of background radiation originating from 176Lu decay in the crystals and higher sensitivity for detecting it. Combined with the ability to scan the entire body in a single bed position, this allows more effective utilization of the lutetium background as a transmission source for estimating 511 keV attenuation coefficients. In this study, utilization of the lutetium background radiation for attenuation correction in total-body PET was studied using Monte Carlo simulations of a 3D whole-body XCAT phantom in the uEXPLORER PET scanner, with a particular focus on the ultralow-dose PET scans that are now made possible with these scanners. The effects of an increased acceptance angle, reduced scan durations, and Compton scattering on PET quantification were studied. Furthermore, the quantification accuracy of lutetium-based attenuation correction was compared for a 20-min whole-body scan on the uEXPLORER, a one-meter-long scanner, and a conventional 24-cm-long scanner. Quantification and lesion contrast were minimally affected in both long axial field-of-view scanners; in a whole-body 20-min scan, the mean bias in all analyzed organs of interest was within ±10% compared to ground-truth activity maps. Quantification was affected in certain organs when the scan duration was reduced to 5 min or a reduced acceptance angle of 17° was used. Analysis of the Compton-scattered events suggests that implementing a scatter correction method for the transmission data will be required, and that increasing the energy threshold from 250 keV to 290 keV can reduce the computational costs and data rates with negligible effects on PET quantification. Finally, the current results can serve as groundwork for transferring lutetium-based attenuation correction into research and clinical practice.

13.
IEEE Trans Med Imaging ; 41(10): 2848-2855, 2022 10.
Article in English | MEDLINE | ID: mdl-35584079

ABSTRACT

Positron emission tomography is widely used in clinical and preclinical applications. Positronium lifetime carries information about the tissue microenvironment where positrons are emitted, but such information has not been captured because of two technical challenges. One challenge is the low sensitivity in detecting triple-coincidence events. This problem has been mitigated by the recent development of PET scanners with a long (1-2 m) axial field of view. The other challenge is the low spatial resolution of the positronium lifetime images formed by existing methods, which is determined by the time-of-flight (TOF) resolution (200-500 ps) of existing PET scanners. This paper solves the second challenge by developing a new image reconstruction method to generate high-resolution positronium lifetime images using existing TOF PET scanners. Simulation studies demonstrate that the proposed method can reconstruct positronium lifetime images at much better spatial resolution than the limit set by the TOF resolution of the PET scanner. The proposed method opens up the possibility of performing positronium lifetime imaging using existing TOF PET scanners. The lifetime information can be used to understand the tissue microenvironment in vivo, which could facilitate the study of disease mechanisms and the selection of proper treatments.


Subjects
Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Algorithms, Computer Simulation, Phantoms, Imaging, Positron-Emission Tomography/methods
14.
Phys Med Biol ; 67(12)2022 06 10.
Article in English | MEDLINE | ID: mdl-35609588

ABSTRACT

Objective. This work assessed the relationship between image signal-to-noise ratio (SNR) and total-body noise-equivalent count rate (NECR), for both non-time-of-flight (TOF) NECR and TOF-NECR, in a long uniform water cylinder and 14 healthy human subjects using the uEXPLORER total-body PET/CT scanner. Approach. A TOF-NEC expression was modified for list-mode PET data, and both the non-TOF NECR and TOF-NECR were compared using datasets from a long uniform water cylinder and 14 human subjects scanned up to 12 h after radiotracer injection. Main results. The TOF-NECR for the uniform water cylinder was found to be linearly proportional to the TOF-reconstructed image SNR² in the range of radioactivity concentrations studied, but not for non-TOF NECR, as indicated by the reduced R² value. The results suggest that the use of TOF-NECR to estimate the count rate performance of TOF-enabled PET systems may be more appropriate for predicting the SNR of TOF-reconstructed images. Significance. Image quality in PET is commonly characterized by image SNR and, correspondingly, the NECR. While the use of NECR for predicting image quality in conventional PET systems is well-studied, the relationship between SNR and NECR has not been examined in detail in long axial field-of-view total-body PET systems, especially for human subjects. Furthermore, the current NEMA NU 2-2018 standard does not account for count rate performance gains due to TOF in the NECR evaluation. The relationship between image SNR and total-body NECR in long axial FOV PET was assessed for the first time using the uEXPLORER total-body PET/CT scanner.
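
The comparison rests on the NEMA-style noise-equivalent count rate and a TOF sensitivity gain. The sketch below computes a non-TOF NECR, applies a commonly used first-order TOF gain approximation (object diameter divided by the TOF localization length c·Δt/2), and fits SNR² against the resulting TOF-NECR; the count rates, SNR values, and the specific gain model are illustrative assumptions, not the paper's modified TOF-NEC expression.

```python
# Minimal sketch (assumed formulas and data): non-TOF NECR from trues/scatter/
# randoms (k = 1 randoms weighting assumed), a first-order TOF gain factor, and
# a linear fit of SNR^2 against TOF-NECR to test proportionality.
import numpy as np

def necr(trues, scatters, randoms):
    return trues ** 2 / (trues + scatters + randoms)   # noise-equivalent count rate

def tof_gain(object_diameter_cm, tof_fwhm_ps):
    dx_cm = 3e10 * (tof_fwhm_ps * 1e-12) / 2           # c * dt / 2, in cm
    return object_diameter_cm / dx_cm                  # first-order sensitivity gain

# Illustrative count rates (kcps) over a decaying activity series
trues = np.array([1200, 900, 650, 420, 260, 150], float)
scat = 0.4 * trues
rand = np.array([800, 450, 230, 95, 36, 12], float)
necr_tof = necr(trues, scat, rand) * tof_gain(27.0, 430.0)

snr = np.array([95, 82, 70, 57, 45, 34], float)        # placeholder image SNR values
slope, intercept = np.polyfit(necr_tof, snr ** 2, 1)   # test SNR^2 proportional to TOF-NECR
r2 = np.corrcoef(necr_tof, snr ** 2)[0, 1] ** 2
```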


Subjects
Fluorodeoxyglucose F18, Positron-Emission Tomography, Humans, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography/methods, Signal-To-Noise Ratio, Water
15.
Med Phys ; 49(5): 3263-3277, 2022 May.
Article in English | MEDLINE | ID: mdl-35229904

ABSTRACT

PURPOSE: Image guidance is used to improve the accuracy of radiation therapy delivery but results in increased dose to patients. This is of particular concern in children, who need to be treated per pediatric Image Gently protocols because of the long-term risks from radiation exposure. The purpose of this study was to design a deep neural network architecture and loss function for improving soft-tissue contrast and preserving small anatomical features in ultra-low-dose cone-beam CT (CBCT) for head and neck cancer (HNC) imaging. METHODS: A 2D compound U-Net architecture (modified U-Net++) with different depths was proposed to enhance the network's capability to capture small-volume structures. A mask-weighted loss function (Mask-Loss) was applied to enhance soft-tissue contrast. Fifty-five paired CBCT and CT images of HNC patients were retrospectively collected for network training and testing. The enhanced CBCT images produced in the present study were evaluated with quantitative metrics including mean absolute error (MAE), signal-to-noise ratio (SNR), and structural similarity (SSIM), and compared with those from previously proposed network architectures (U-Net and wide U-Net) using MAE loss functions. A visual assessment of ten selected structures in the enhanced CBCT images of each patient was performed to evaluate image-quality improvement, blindly scored by an experienced radiation oncologist specializing in HNC. RESULTS: All the enhanced CBCT images showed reduced artifactual distortion and image noise. U-Net++ outperformed the U-Net and wide U-Net in terms of MAE, contrast near structure boundaries, and small structures. The proposed Mask-Loss improved image contrast and accuracy in the soft-tissue regions. The enhanced CBCT images predicted by U-Net++ with Mask-Loss demonstrated improvement over the U-Net in terms of average MAE (52.41 vs 42.85 HU), SNR (14.14 vs 15.07 dB), and SSIM (0.84 vs 0.87) (p < 0.01 in all paired t-tests). The visual assessment showed that the proposed U-Net++ and Mask-Loss significantly improved the original CBCTs (p < 0.01) compared to the U-Net and MAE loss. CONCLUSIONS: The proposed network architecture and loss function effectively improved image quality in terms of soft-tissue contrast, organ boundaries, and small-structure preservation for ultra-low-dose CBCT following the Image Gently protocol. This method has the potential to provide sufficient anatomical representation in the enhanced CBCT images for accurate treatment delivery and potentially fast online adaptive re-planning for HNC patients.
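
The Mask-Loss idea, weighting errors more heavily inside a soft-tissue mask, can be sketched as below. The weighting scheme (a simple binary up-weighting) and the weight values are assumptions for illustration; the paper's exact loss formulation may differ.

```python
# Minimal sketch (assumed weighting scheme): a mask-weighted MAE ("Mask-Loss")
# that up-weights voxels inside a soft-tissue mask so the network is penalized
# more heavily for errors in low-contrast regions.
import numpy as np

def mask_weighted_mae(pred_hu, target_hu, soft_tissue_mask, w_in=5.0, w_out=1.0):
    """MAE with larger weight inside the soft-tissue mask (weights are assumptions)."""
    weights = np.where(soft_tissue_mask, w_in, w_out)
    return np.sum(weights * np.abs(pred_hu - target_hu)) / np.sum(weights)

# Toy example: 2-D slices in HU with a central soft-tissue region
rng = np.random.default_rng(0)
target = rng.normal(0, 30, (256, 256))
pred = target + rng.normal(0, 20, (256, 256))
mask = np.zeros((256, 256), dtype=bool); mask[64:192, 64:192] = True
print(f"Mask-Loss: {mask_weighted_mae(pred, target, mask):.1f} HU")
```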


Subjects
Deep Learning, Head and Neck Neoplasms, Child, Cone-Beam Computed Tomography/methods, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Radiotherapy Planning, Computer-Assisted/methods, Retrospective Studies
16.
IEEE Trans Med Imaging ; 41(3): 680-689, 2022 03.
Article in English | MEDLINE | ID: mdl-34652998

ABSTRACT

Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels are available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long enough. In this work, we proposed an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, which is readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint and was also utilized to construct a kernel layer to perform non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1 × 1 × 1 convolution layer. Evaluations based on dynamic datasets of 18F-FDG and 11C-PiB tracers show that the proposed framework can outperform the traditional and the kernel method-based direct reconstruction methods.
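
For the Patlak model mentioned above, the linear kinetic relationship that makes a 1 × 1 × 1 convolution layer sufficient is the standard Patlak graphical form, sketched below with a synthetic input function. The input-function shape, frame timing, and t* are assumptions used only to illustrate the slope/intercept interpretation (Ki and the volume term).

```python
# Minimal sketch of the Patlak model used as the linear kinetic layer: after a
# steady-state time t*, C_T(t)/C_p(t) = Ki * (integral of C_p)/C_p(t) + V, so
# Ki and V follow from a straight-line fit. All numbers are illustrative.
import numpy as np

t = np.arange(0.5, 60.5, 0.5)                              # minutes
cp = 80 * t * np.exp(-t / 3.0) + 6 * np.exp(-t / 60.0)     # synthetic plasma input
ki_true, v_true = 0.03, 0.4
int_cp = np.cumsum(cp) * 0.5                               # integral of C_p (rectangle rule)
ct = ki_true * int_cp + v_true * cp                        # Patlak-consistent tissue TAC

tstar = t >= 20                                            # use frames after t* = 20 min
x = int_cp[tstar] / cp[tstar]
y = ct[tstar] / cp[tstar]
ki_est, v_est = np.polyfit(x, y, 1)                        # slope = Ki, intercept = V
```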


Subjects
Image Processing, Computer-Assisted, Positron Emission Tomography Computed Tomography, Algorithms, Fluorodeoxyglucose F18, Humans, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Signal-To-Noise Ratio
17.
J Nucl Med ; 63(8): 1274-1281, 2022 08.
Article in English | MEDLINE | ID: mdl-34795014

ABSTRACT

Quantitative dynamic PET with compartmental modeling has the potential to enable multiparametric imaging and more accurate quantification than static PET imaging. Conventional methods for parametric imaging commonly use a single kinetic model for all image voxels and neglect the heterogeneity of physiologic models, which can work well for single-organ parametric imaging but may significantly compromise total-body parametric imaging on a scanner with a long axial field of view. In this paper, we evaluate the necessity of voxelwise compartmental modeling strategies, including time delay correction (TDC) and model selection, for total-body multiparametric imaging. Methods: Ten subjects (5 patients with metastatic cancer and 5 healthy volunteers) were scanned on a total-body PET/CT system after injection of 370 MBq of 18F-FDG. Dynamic data were acquired for 60 min. Total-body parametric imaging was performed using 2 approaches. One was the conventional method that uses a single irreversible 2-tissue-compartment model with and without TDC. The second approach selects the best kinetic model from 3 candidate models for individual voxels. The differences between the 2 approaches were evaluated for parametric imaging of microkinetic parameters and the 18F-FDG net influx rate, Ki. Results: TDC had a nonnegligible effect on kinetic quantification of various organs and lesions. The effect was larger in lesions with a higher blood volume. Parametric imaging of Ki with the standard 2-tissue-compartment model introduced vascular-region artifacts, which were overcome by the voxelwise model selection strategy. Conclusion: The time delay and appropriate kinetic model vary in different organs and lesions. Modeling of the time delay of the blood input function and model selection improved total-body multiparametric imaging.
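
One common way to implement voxelwise model selection among candidate kinetic models is an information criterion such as AIC; the sketch below shows that pattern. The specific criterion, the candidate-model names, and the numbers are assumptions, not necessarily the selection rule used in the paper.

```python
# Minimal sketch (criterion assumed): voxelwise selection among candidate kinetic
# models by the Akaike information criterion, AIC = n*ln(RSS/n) + 2k, computed
# from each model's residual sum of squares and parameter count.
import numpy as np

def aic(rss, n_frames, n_params):
    return n_frames * np.log(rss / n_frames) + 2 * n_params

def select_model(fits, n_frames):
    """fits: dict model_name -> (rss, n_params); returns name with lowest AIC and all scores."""
    scores = {name: aic(rss, n_frames, k) for name, (rss, k) in fits.items()}
    return min(scores, key=scores.get), scores

# Example for one voxel's TAC fit results (model names and numbers are placeholders)
fits = {"1T2k": (8.4, 3), "2T3k_irreversible": (5.1, 4), "2T4k": (4.9, 5)}
best, scores = select_model(fits, n_frames=60)
print(best, scores)
```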


Subjects
Fluorodeoxyglucose F18, Neoplasms, Algorithms, Humans, Neoplasms/diagnostic imaging, Positron Emission Tomography Computed Tomography, Positron-Emission Tomography/methods
18.
IEEE Trans Med Imaging ; 41(5): 1230-1241, 2022 05.
Article in English | MEDLINE | ID: mdl-34928789

ABSTRACT

Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory gated data through a joint estimation framework. However, conventional motion estimation methods based on registration of a pair of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation. We propose a joint estimation framework by incorporating a learned image registration network into a regularized PET image reconstruction. The joint estimation was formulated as a constrained optimization problem with moving gated images related to a fixed image via the deep neural network. The constrained optimization problem is solved by the alternating direction method of multipliers (ADMM) algorithm. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation. Motion compensated reconstructions using pre-calculated deformation fields by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison. Our simulation study shows that the proposed DL-ADMM joint estimation method reduces bias compared to the ungated image without increasing noise and outperforms the competing methods. In the real data study, our proposed method also generated higher lesion contrast and sharper liver boundaries compared to the ungated image and had lower noise than the reference gated image.


Subjects
Deep Learning, Algorithms, Artifacts, Humans, Image Processing, Computer-Assisted/methods, Motion, Positron-Emission Tomography/methods
19.
Phys Med Biol ; 66(21)2021 10 19.
Article in English | MEDLINE | ID: mdl-34607324

ABSTRACT

OBJECTIVE: Dual-ended readout depth-encoding detectors based on bismuth germanate (BGO) scintillation crystal arrays are good candidates for high-sensitivity small-animal positron emission tomography used for very-low-dose imaging. In this paper, the performance of three dual-ended readout detectors based on 15 × 15 BGO arrays with three different reflector arrangements and 8 × 8 silicon photomultiplier arrays was evaluated and compared. APPROACH: The three BGO arrays, denoted wo-ILG (without internal light guide), wp-ILG (with partial internal light guide), and wf-ILG (with full internal light guide), share a pitch of 1.6 mm and a thickness of 20 mm. Toray E60 with a thickness of 50 µm was used as the inter-crystal reflector. The reflector lengths in the wo-ILG and wf-ILG BGO arrays were 20 and 18 mm, respectively; the reflectors in the wp-ILG BGO array were 18 mm in the central region of the array and 20 mm at the edges. By using 18 mm reflectors, part of the crystals in the wp-ILG and wf-ILG BGO arrays acted as internal light guides. MAIN RESULTS: The results showed that the detector based on the wo-ILG BGO array provided the best flood histogram. The energy, timing, and DOI resolutions of the three detectors were similar. The energy resolutions (full width at half maximum, FWHM) of the detectors based on the wo-ILG, wp-ILG, and wf-ILG BGO arrays were 27.2 ± 3.9%, 28.7 ± 4.6%, and 29.5 ± 4.7%, respectively. The timing resolutions (FWHM) were 4.7 ± 0.5 ns, 4.9 ± 0.5 ns, and 5.0 ± 0.6 ns, respectively. The DOI resolutions (FWHM) were 3.0 ± 0.2 mm, 2.9 ± 0.2 mm, and 3.0 ± 0.2 mm, respectively. Overall, the wo-ILG detector provided the best performance.
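
For context, dual-ended readout detectors typically estimate depth of interaction from the ratio of light collected at the two crystal ends. The sketch below shows a simple linear ratio-to-depth mapping; the calibration endpoints are assumed values that would be measured per crystal in practice, and the paper's exact DOI estimator may differ.

```python
# Minimal sketch (assumed linear calibration): depth of interaction (DOI) from a
# dual-ended readout estimated with the light-sharing ratio R = A / (A + B),
# where A and B are the signals collected at the two crystal ends.
import numpy as np

def doi_from_ratio(signal_a, signal_b, crystal_length_mm=20.0,
                   r_top=0.75, r_bottom=0.25):
    """Map the ratio R linearly onto depth; r_top/r_bottom are calibration
    endpoints that would be measured per crystal in practice (values assumed)."""
    r = signal_a / (signal_a + signal_b)
    depth = (r - r_bottom) / (r_top - r_bottom) * crystal_length_mm
    return np.clip(depth, 0.0, crystal_length_mm)

# Example: two events with different light sharing between the two SiPM ends
a = np.array([620.0, 380.0]); b = np.array([300.0, 540.0])
print(doi_from_ratio(a, b))     # depths in mm along the 20 mm BGO crystal
```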


Subjects
Germanium, Animals, Bismuth, Positron-Emission Tomography/methods
20.
Med Phys ; 48(9): 5244-5258, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34129690

ABSTRACT

PURPOSE: The development of PET/CT and PET/MR scanners provides opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS: We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum-likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum-likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance. RESULTS: The reconstructed images showed that the proposed constrained ML reconstruction approach produced higher-quality images than the competing methods. Tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. Image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all the competing methods at a matched lesion contrast. CONCLUSIONS: The supervised co-learning strategy can improve the performance of constrained maximum-likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
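
The two investigated input configurations can be sketched as below: a multichannel network that stacks PET and CT as input channels versus a multibranch network with separate encoders whose latent features are concatenated before decoding. Layer counts and channel sizes are assumptions; the paper's actual architectures are deeper and embedded in the constrained ML reconstruction.

```python
# Minimal sketch (layer sizes assumed) contrasting the two studied input schemes:
# a multichannel CNN stacking PET and CT as channels, and a multibranch CNN with
# separate PET/CT encoders whose latent features are concatenated.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU())

class MultiChannelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(2, 16), conv_block(16, 16),
                                  nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, pet, ct):
        return self.body(torch.cat([pet, ct], dim=1))   # PET/CT as two channels

class MultiBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_pet = conv_block(1, 16)                 # modality-specific encoders
        self.enc_ct = conv_block(1, 16)
        self.decoder = nn.Sequential(conv_block(32, 16),
                                     nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, pet, ct):
        z = torch.cat([self.enc_pet(pet), self.enc_ct(ct)], dim=1)
        return self.decoder(z)                           # fuse latent features

pet = torch.rand(1, 1, 16, 32, 32); ct = torch.rand(1, 1, 16, 32, 32)
out_a = MultiChannelNet()(pet, ct); out_b = MultiBranchNet()(pet, ct)
```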


Subjects
Image Processing, Computer-Assisted, Positron Emission Tomography Computed Tomography, Humans, Neural Networks, Computer, Positron-Emission Tomography, Tomography, X-Ray Computed