MR Template-Based Individual Brain PET Volumes-of-Interest Generation Neither Using MR nor Using Spatial Normalization.
Seo, Seung Yeon; Oh, Jungsu S; Chung, Jinwha; Kim, Seog-Young; Kim, Jae Seung.
Affiliation
  • Seo SY; Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea.
  • Oh JS; Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
  • Chung J; Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea.
  • Kim SY; Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea.
  • Kim JS; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
Nucl Med Mol Imaging ; 57(2): 73-85, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36998592
ABSTRACT
For more anatomically precise quantitation of mouse brain PET, spatial normalization (SN) of PET onto an MR template and subsequent template volumes-of-interest (VOIs)-based analysis are commonly used. However, this approach depends on the corresponding MR image and the SN process, and routine preclinical/clinical PET studies do not always have a corresponding MR image or relevant VOIs available. To resolve this issue, we propose a deep learning (DL)-based method that generates individual-brain-specific VOIs (i.e., cortex, hippocampus, striatum, thalamus, and cerebellum) directly from PET images, using inverse-spatial-normalization (iSN)-based VOI labels and a deep convolutional neural network (deep CNN). Our technique was applied to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans before and after the administration of human immunoglobulin or antibody-based treatments. To train the CNN, PET images were used as inputs and MR iSN-based target VOIs as labels. Our method achieved good performance not only in VOI agreement (i.e., Dice similarity coefficient) but also in the correlation of mean counts and standardized uptake value ratios (SUVRs); the CNN-based VOIs were highly concordant with the ground truth (VOIs derived from the corresponding MR and the MR template). Moreover, the performance metrics were comparable to those of VOIs generated by an MR-based deep CNN. In conclusion, we established a novel quantitative analysis method that generates individual-brain-space VOIs for PET quantification from MR template-based VOIs in both an MR-less and an SN-less fashion.
Supplementary Information: The online version contains supplementary material available at 10.1007/s13139-022-00772-4.
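The abstract evaluates predicted VOIs by Dice similarity coefficient and by correlation of mean counts and SUVRs. The following is a minimal sketch, not the authors' implementation, of how those metrics can be computed from binary VOI masks and a PET volume; the use of the cerebellum as the SUVR reference region and the array shapes are assumptions for illustration only.

```python
# Minimal sketch of the evaluation metrics described in the abstract.
# Assumptions: binary 3D VOI masks co-registered with the PET volume,
# and the cerebellum used as the SUVR reference region.
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def region_mean(pet: np.ndarray, mask: np.ndarray) -> float:
    """Mean PET counts inside a binary VOI mask."""
    return float(pet[mask.astype(bool)].mean())


def suvr(pet: np.ndarray, target: np.ndarray, reference: np.ndarray) -> float:
    """Standardized uptake value ratio: target mean / reference mean."""
    return region_mean(pet, target) / region_mean(pet, reference)


# Toy example with random volumes; real inputs would be the 18F-FDG PET
# image, the CNN-predicted VOI, and the MR/iSN-based ground-truth VOI.
rng = np.random.default_rng(0)
pet = rng.random((64, 64, 32))
pred_voi = rng.random((64, 64, 32)) > 0.7
true_voi = rng.random((64, 64, 32)) > 0.7
cerebellum_voi = rng.random((64, 64, 32)) > 0.9

print("Dice:", dice_coefficient(pred_voi, true_voi))
print("SUVR (predicted VOI):", suvr(pet, pred_voi, cerebellum_voi))
print("SUVR (ground truth):", suvr(pet, true_voi, cerebellum_voi))
```

In the study, agreement would be summarized by the Dice values per region and by correlating the mean counts and SUVRs obtained from the CNN-predicted VOIs against those from the MR-derived ground-truth VOIs.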
Full text: 1 Database: MEDLINE Language: English Journal: Nucl Med Mol Imaging Publication year: 2023 Document type: Article
