ABSTRACT
Head motion remains a persistent problem in brain PET studies. A wealth of motion correction (MC) algorithms have been proposed, including both hardware-based and data-driven methods. However, in most real brain PET studies, in the absence of a ground truth or gold standard for motion information, it is challenging to evaluate MC quality objectively. Image-domain metrics, e.g., the change in standardized uptake value (SUV) before and after MC, are commonly used for MC evaluation, but such measures lack objectivity because 1) other factors, e.g., attenuation correction, scatter correction and reconstruction parameters, confound the assessment of MC effectiveness; 2) SUV only reflects final image quality and cannot precisely indicate when an MC method performed well or poorly during the scan; and 3) SUV is tracer-dependent, and head motion may increase or decrease SUV depending on the tracer, which complicates the evaluation of MC effectiveness. Here, we present a new algorithm, motion-corrected centroid-of-distribution (MCCOD), to perform objective quality control of measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of the tracer distribution after rigid MC using the existing motion information. It indicates whether the motion information is accurate using the PET raw data only, i.e., without PET image reconstruction: inaccurate motion information typically produces abrupt changes in the MCCOD trace. MCCOD was validated in simulation studies and tested on real studies acquired on both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain mask segmentation was implemented and is shown to be necessary for non-TOF MCCOD generation.
MCCOD is shown to be effective in detecting abrupt translational motion errors introduced by the motion-tracking hardware when the tracer distribution varies slowly, and it can be used both to compare different motion estimation methods and to improve existing motion information.
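The core of the MCCOD idea described above, a per-second centroid trace whose abrupt jumps flag inaccurate motion information, can be sketched as follows. This is a minimal NumPy illustration; the function names, the 1 mm threshold, and the use of event midpoints as "central coordinates" are our assumptions, not the paper's implementation.

```python
import numpy as np

def cod_trace(event_coords, event_times, bin_width=1.0):
    """Average the central coordinates of all coincidence events in
    each time bin to form a centroid-of-distribution (COD) trace."""
    n_bins = int(np.ceil(event_times.max() / bin_width))
    trace = np.zeros((n_bins, 3))
    for i in range(n_bins):
        mask = (event_times >= i * bin_width) & (event_times < (i + 1) * bin_width)
        trace[i] = event_coords[mask].mean(axis=0)
    return trace

def flag_abrupt_changes(trace, threshold_mm=1.0):
    """Flag bin indices where the COD jumps by more than threshold_mm
    between consecutive bins -- a signature of inaccurate motion info."""
    step = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.where(step > threshold_mm)[0] + 1
```

With accurate motion information the motion-corrected trace stays smooth; a residual (uncorrected) shift shows up as a single large step that the second function catches.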
Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Motion (Physics) , Algorithms , Brain/diagnostic imaging
ABSTRACT
Head motion during PET scans causes image quality degradation, decreased concentration in regions with high uptake and incorrect outcome measures from kinetic analysis of dynamic datasets. Previously, we proposed a data-driven method, center of tracer distribution (COD), to detect head motion without an external motion tracking device. There, motion was detected using one dimension of the COD trace with a semiautomatic detection algorithm that required multiple user-defined parameters and manual intervention. In this study, we developed a new data-driven motion detection algorithm that is automatic, self-adaptive to the noise level, requires no user-defined parameters, and uses all three dimensions of the COD trace (3DCOD). 3DCOD was first validated and tested using 30 simulation studies (18F-FDG, N = 15; 11C-raclopride (RAC), N = 15) with large motion. The proposed motion correction method was then tested on 22 real human datasets, 20 acquired from a high-resolution research tomograph (HRRT) scanner (18F-FDG, N = 10; 11C-RAC, N = 10) and 2 from the Siemens Biograph mCT scanner. Real-time hardware-based motion tracking information (Vicra) was available for all real studies and was used as the gold standard. 3DCOD was compared to Vicra, no motion correction (NMC), one-direction COD (our previous method, here called 1DCOD) and two conventional frame-based image registration (FIR) algorithms, i.e., FIR1 (based on predefined frames reconstructed with attenuation correction) and FIR2 (without attenuation correction), for both simulation and real studies.
For the simulation studies, 3DCOD yielded -2.3 ± 1.4% (mean ± standard deviation across all subjects and 11 brain regions) error in region of interest (ROI) uptake for 18F-FDG (-3.4 ± 1.7% for 11C-RAC across all subjects and 2 regions) as compared to Vicra (perfect correction) while NMC, FIR1, FIR2 and 1DCOD yielded -25.4 ± 11.1% (-34.5 ± 16.1% for 11C-RAC), -13.4 ± 3.5% (-16.1 ± 4.6%), -5.7 ± 3.6% (-8.0 ± 4.5%) and -2.6 ± 1.5% (-5.1 ± 2.7%), respectively. For real HRRT studies, 3DCOD yielded -0.3 ± 2.8% difference for 18F-FDG (-0.4 ± 3.2% for 11C-RAC) as compared to Vicra while NMC, FIR1, FIR2 and 1DCOD yielded -14.9 ± 9.0% (-24.5 ± 14.6%), -3.6 ± 4.9% (-13.4 ± 14.3%), -0.6 ± 3.4% (-6.7 ± 5.3%) and -1.5 ± 4.2% (-2.2 ± 4.1%), respectively. In summary, the proposed motion correction method yielded comparable performance to the hardware-based motion tracking method for multiple tracers, including very challenging cases with large frequent head motion, in studies performed on a non-TOF scanner.
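A detector that is "self-adaptive to the noise level", as 3DCOD is described, could for example scale its threshold to a robust noise estimate of the trace itself. The median-absolute-deviation rule below is a hypothetical sketch of such a scheme, not the authors' published algorithm:

```python
import numpy as np

def detect_motion_3d(trace, n_sigma=5.0):
    """Detect motion time points from a (T, 3) COD trace.
    The threshold adapts to the noise level, estimated per axis with
    the median absolute deviation (MAD) of the point-to-point steps."""
    steps = np.diff(trace, axis=0)                      # (T-1, 3)
    mad = np.median(np.abs(steps - np.median(steps, axis=0)), axis=0)
    sigma = 1.4826 * mad                                # MAD -> std for Gaussian noise
    hits = np.abs(steps) > n_sigma * sigma              # per-axis exceedance
    return np.where(hits.any(axis=1))[0] + 1
```

Because the MAD ignores rare outliers, a genuine head movement does not inflate the noise estimate, so no user-defined threshold is needed, which is the key difference from the semiautomatic 1DCOD detector.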
Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Algorithms , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Kinetics , Motion , Movement , Positron-Emission Tomography/methods
ABSTRACT
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS: Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, 40 subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists, and tumor SUV and volume measures were reported, along with conventional image analysis metrics. RESULTS: µ-DL yielded high-resolution attenuation maps with fine detail recovery that were superior in quality to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: -3.6 ± 4.4% vs. -1.7 ± 4.5% for 18F-FDG (N = 152), -4.3 ± 5.1% vs. 0.4 ± 2.8% for 68Ga-DOTATATE (N = 70), and -7.3 ± 2.9% vs. -2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., -8.4 ± 14.5% (OSEMMLAA) vs. -3.0 ± 15.0% for 18F-FDG, -14.1 ± 19.7% vs. 1.8 ± 11.6% for 68Ga-DOTATATE, and -15.9 ± 9.1% vs. -6.4 ± 6.4% for 18F-Fluciclovine.
CONCLUSIONS: The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68Ga-DOTATATE and 18F-Fluciclovine studies, in tumor SUV measures as well as tumor volume estimation, with quality clinically equivalent to CT-based attenuation correction for the three tracers.
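The abstract does not spell out the imaging physics-based loss. One plausible schematic form, shown purely for illustration, combines a voxelwise term on the attenuation map with a term on its line integrals (a crude stand-in for the forward projection that governs attenuation in PET); the weighting, projection model, and function name are all our assumptions:

```python
import numpy as np

def physics_loss(mu_pred, mu_ct, weight=1.0):
    """Schematic imaging-physics-based loss: voxelwise L1 on the
    attenuation map plus L1 on simple axis-aligned line integrals.
    Hypothetical form, not the loss used in the study."""
    image_term = np.mean(np.abs(mu_pred - mu_ct))
    proj_pred = mu_pred.sum(axis=0)   # crude parallel-beam projection
    proj_ct = mu_ct.sum(axis=0)
    physics_term = np.mean(np.abs(proj_pred - proj_ct))
    return image_term + weight * physics_term
```

The intuition is that errors in µ matter through their integrals along lines of response, so penalizing the projections directly targets the quantity that biases the reconstruction.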
Subjects
Deep Learning , Neoplasms , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography , Radionuclide Imaging , Radiopharmaceuticals
ABSTRACT
Head motion occurring during brain positron emission tomography (PET) image acquisition degrades image quality and induces quantification errors. We previously introduced a Deep Learning Head Motion Correction (DL-HMC) method, trained with supervised learning against the gold-standard Polaris Vicra motion tracking device, and showed its potential. In this study, we upgrade our network to a multi-task architecture that includes image appearance prediction in the learning process. This multi-task Deep Learning Head Motion Correction (mtDL-HMC) model was trained on 21 subjects and showed improved motion prediction performance compared to our previous DL-HMC method, both quantitatively and qualitatively, on 5 test subjects. We also evaluate the trustworthiness of network predictions by performing Monte Carlo Dropout at inference on the test subjects: data associated with high motion prediction uncertainty are discarded, and we show that this does not harm the quality of the reconstructed images and can even improve it.
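Monte Carlo Dropout at inference, as used above for trustworthiness, amounts to keeping dropout active and treating the spread of repeated stochastic forward passes as an uncertainty estimate. A generic sketch (the wrapper name and sampling count are our choices, not the paper's):

```python
import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_samples=100, seed=0):
    """Monte Carlo Dropout at inference: run the stochastic forward
    pass n_samples times; the mean is the prediction and the standard
    deviation is the uncertainty used to reject untrustworthy data."""
    rng = np.random.default_rng(seed)
    preds = np.stack([stochastic_forward(x, rng) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```

Seconds of data whose predicted motion has a large standard deviation would then be excluded from the motion-compensated reconstruction, mirroring the discard step described above.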
ABSTRACT
Objective. Head motion correction (MC) is an essential process in brain positron emission tomography (PET) imaging. We have used the Polaris Vicra, an optical hardware-based motion tracking (HMT) device, for PET head MC. However, it requires attaching a marker to the subject's head. Markerless HMT (MLMT) methods are more convenient for clinical translation than HMT with external markers. In this study, we validated the United Imaging Healthcare motion tracking (UMT) MLMT system using phantom and human point source studies, and tested its effectiveness on eight 18F-FPEB and four 11C-LSN3172176 human studies, with frame-based region of interest (ROI) analysis. We also proposed an evaluation metric, registration quality (RQ), and compared it to a data-driven evaluation method, motion-corrected centroid-of-distribution (MCCOD). Approach. UMT utilized a stereovision camera with infrared structured light to capture the subject's real-time 3D facial surface. Each point cloud, acquired at up to 30 Hz, was registered to the reference cloud using a rigid-body iterative closest point (ICP) registration algorithm. Main results. In the phantom point source study, UMT exhibited better reconstruction results than the Vicra, with higher spatial resolution (0.35 ± 0.27 mm) and smaller residual displacements (0.12 ± 0.10 mm). In the human point source study, UMT achieved spatial resolution comparable to Vicra with lower noise. Moreover, UMT achieved ROI values comparable to Vicra for all the human studies, with negligible mean standardized uptake value differences, whereas results without MC showed significant negative bias. The RQ evaluation metric demonstrated the effectiveness of UMT and yielded results comparable to MCCOD. Significance. We performed an initial validation of a commercial MLMT system against the Vicra. Overall, UMT achieved comparable motion-tracking results in all studies, and the effectiveness of UMT-based MC was demonstrated.
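Rigid-body ICP, as used by UMT, alternates nearest-neighbor matching of the two clouds with a closed-form rigid fit. The fit step (the Kabsch/Procrustes solution) can be sketched as follows; this is a generic illustration, not UMT's implementation:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    point set src onto dst, given one-to-one correspondences. ICP
    iterates this after re-matching nearest-neighbor pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Applied to each 30 Hz facial point cloud against the reference cloud, the recovered (R, t) gives the head pose used for motion correction.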
Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Head/diagnostic imaging , Brain/diagnostic imaging , Motion , Phantoms, Imaging , Algorithms , Movement
ABSTRACT
Head movement is a major limitation in brain positron emission tomography (PET) imaging; it produces image artifacts and quantification errors. Head motion correction plays a critical role in quantitative image analysis and in the diagnosis of nervous system diseases. However, to date, no approach can track head motion continuously without an external device. Here, we develop a deep learning-based algorithm to predict rigid motion for brain PET by leveraging existing dynamic PET scans with gold-standard motion measurements from external Polaris Vicra tracking. We propose a novel Deep Learning for Head Motion Correction (DL-HMC) methodology that consists of three components: (i) PET input data encoder layers; (ii) regression layers to estimate the six rigid motion transformation parameters; and (iii) feature-wise transformation (FWT) layers to condition the network on tracer time-activity. The input of DL-HMC is sampled pairs of one-second 3D cloud representations of the PET data, and the output is the prediction of the six rigid transformation motion parameters. We trained this network in a supervised manner using the Vicra motion tracking information as the gold standard. We evaluate DL-HMC quantitatively by comparing to gold-standard Vicra measurements, evaluate the reconstructed images qualitatively, and perform region-of-interest standardized uptake value (SUV) measurements. An algorithm ablation study was performed to determine the contribution of each DL-HMC design choice to network performance. Our results demonstrate accurate motion prediction for brain PET using a data-driven registration approach without external motion tracking hardware. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_miccai2022.
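Feature-wise transformation layers of the kind described in (iii) typically apply a per-channel scale and shift predicted from a conditioning input (here, something encoding tracer time-activity), in the style of FiLM layers. A minimal sketch with hypothetical names and shapes:

```python
import numpy as np

def fwt(features, time_embedding, W_gamma, W_beta):
    """Feature-wise transformation: per-channel scale (gamma) and shift
    (beta) predicted from a time/tracer embedding and applied to a
    (C,)-channel feature vector. FiLM-style conditioning sketch."""
    gamma = time_embedding @ W_gamma   # (C,)
    beta = time_embedding @ W_beta     # (C,)
    return gamma * features + beta
```

Conditioning this way lets one network handle the changing tracer distribution over the scan: the same encoder features are modulated differently at early versus late time points.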
ABSTRACT
Head motion degrades image quality and causes erroneous parameter estimates in tracer kinetic modeling in brain PET studies. Existing motion correction methods include frame-based image registration (FIR) and correction using real-time hardware-based motion tracking (HMT) information. However, FIR cannot correct for motion within a predefined scan period, and HMT is not readily available in the clinic since it typically requires attaching a tracking device to the patient. In this study, we propose a motion correction framework with a data-driven algorithm, that is, one using the PET raw data itself, to address these limitations. Methods: We propose a data-driven algorithm, centroid of distribution (COD), to detect head motion. In COD, the central coordinates of the lines of response of all events are averaged over 1-s intervals to generate a COD trace. A point-to-point change in the COD trace in one direction that exceeded a user-defined threshold was defined as a time point of head motion; additional motion time points could then be added manually. All the frames defined by these time points were reconstructed without attenuation correction and rigidly registered to a reference frame. The resulting transformation matrices were then used to perform the final motion-compensated reconstruction. We applied the new COD framework to 23 human dynamic datasets, all containing large head motion, with 18F-FDG (n = 13) and 11C-UCB-J ((R)-1-((3-(11C-methyl-11C)pyridin-4-yl)methyl)-4-(3,4,5-trifluorophenyl)pyrrolidin-2-one) (n = 10) and compared its performance with FIR and with HMT using Vicra (an optical HMT device), which can be considered the gold standard. Results: The COD method yielded a 1.0% ± 3.2% (mean ± SD across all subjects and 12 gray matter regions) SUV difference for 18F-FDG (3.7% ± 5.4% for 11C-UCB-J) compared with HMT, whereas no motion correction (NMC) and FIR yielded -15.7% ± 12.2% (-20.5% ± 15.8%) and -4.7% ± 6.9% (-6.2% ± 11.0%), respectively.
For 18F-FDG dynamic studies, COD yielded differences of 3.6% ± 10.9% in Ki value as compared with HMT, whereas NMC and FIR yielded -18.0% ± 39.2% and -2.6% ± 19.8%, respectively. For 11C-UCB-J, COD yielded 3.7% ± 5.2% differences in VT compared with HMT, whereas NMC and FIR yielded -20.0% ± 12.5% and -5.3% ± 9.4%, respectively. Conclusion: The proposed COD-based data-driven motion correction method outperformed FIR and achieved comparable or even better performance than the Vicra HMT method in both static and dynamic studies.
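The frame-definition step of the COD framework, splitting the scan wherever the 1-s COD trace changes by more than the user-defined threshold, then reconstructing and registering each resulting frame, can be sketched as follows (the function name and example threshold are illustrative; the paper also allows adding time points manually):

```python
import numpy as np

def frame_boundaries(trace_1d, threshold, scan_end):
    """Split the scan into frames for motion-compensated reconstruction:
    cut at every second where the one-direction COD trace changes by
    more than `threshold` between consecutive points."""
    step = np.abs(np.diff(trace_1d))
    cuts = (np.where(step > threshold)[0] + 1).tolist()
    edges = [0] + cuts + [scan_end]
    return list(zip(edges[:-1], edges[1:]))
```

Each (start, end) frame is then reconstructed without attenuation correction, rigidly registered to the reference frame, and the resulting transforms feed the final motion-compensated reconstruction described above.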