Cross-Attention for Improved Motion Correction in Brain PET.
Cai, Zhuotong; Zeng, Tianyi; Lieffrig, Eléonore V; Zhang, Jiazhen; Chen, Fuyao; Toyonaga, Takuya; You, Chenyu; Xin, Jingmin; Zheng, Nanning; Lu, Yihuan; Duncan, James S; Onofrey, John A.
Affiliation
  • Cai Z; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China.
  • Zeng T; Department of Radiology & Biomedical Imaging, New Haven, CT, USA.
  • Lieffrig EV; Department of Biomedical Engineering, New Haven, CT, USA.
  • Zhang J; Department of Radiology & Biomedical Imaging, New Haven, CT, USA.
  • Chen F; Department of Radiology & Biomedical Imaging, New Haven, CT, USA.
  • Toyonaga T; Department of Biomedical Engineering, New Haven, CT, USA.
  • You C; Department of Biomedical Engineering, New Haven, CT, USA.
  • Xin J; Department of Radiology & Biomedical Imaging, New Haven, CT, USA.
  • Zheng N; Department of Electrical Engineering, New Haven, CT, USA.
  • Lu Y; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China.
  • Duncan JS; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China.
  • Onofrey JA; United Imaging Healthcare, Shanghai, China yihuan.lu@united-imaging.com.
Mach Learn Clin Neuroimaging (2023) ; 14312: 34-45, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38174216
ABSTRACT
Head movement during long scan sessions degrades reconstruction quality in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust for testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most relevant inherent information - the head region - for motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing on HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without depending on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
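The core idea described above - queries derived from the moving image attending to keys and values derived from the reference image - can be sketched as a minimal scaled dot-product cross-attention in NumPy. This is an illustrative sketch only, not the paper's actual architecture: the feature dimensions, the random projection matrices `W_q`, `W_k`, `W_v`, and the flattened per-voxel feature layout are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(ref_feats, mov_feats, d_k=32, seed=0):
    """Illustrative cross-attention: moving-image features (queries)
    attend over reference-image features (keys/values).

    ref_feats: (N_ref, d_in) flattened reference-image features
    mov_feats: (N_mov, d_in) flattened moving-image features
    Returns (attended, attn) where attn rows sum to 1.
    """
    rng = np.random.default_rng(seed)
    d_in = ref_feats.shape[-1]
    # Hypothetical learned projections, here drawn at random for the sketch.
    W_q = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)
    W_k = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)
    W_v = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)

    Q = mov_feats @ W_q          # (N_mov, d_k)
    K = ref_feats @ W_k          # (N_ref, d_k)
    V = ref_feats @ W_v          # (N_ref, d_k)

    # Each moving-image location attends over all reference locations,
    # giving the spatial correspondence map the abstract describes.
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)   # (N_mov, N_ref)
    return attn @ V, attn
```

In the paper's setting, the attention map would concentrate on the head region shared by both frames, so the downstream motion regressor sees reference features aligned to each moving-image location rather than raw, unaligned features.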
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Mach Learn Clin Neuroimaging (2023) Year: 2023 Document type: Article Country of affiliation: China