Evaluation of Deep Learning-Based Approaches to Segment Bowel Air Pockets and Generate Pelvic Attenuation Maps from CAIPIRINHA-Accelerated Dixon MR Images.
Sari, Hasan; Reaungamornrat, Ja; Catalano, Onofrio A; Vera-Olmos, Javier; Izquierdo-Garcia, David; Morales, Manuel A; Torrado-Carvajal, Angel; Ng, Thomas S C; Malpica, Norberto; Kamen, Ali; Catana, Ciprian.
Affiliation
  • Sari H; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts.
  • Reaungamornrat J; Siemens Corporate Research, Princeton, New Jersey.
  • Catalano OA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts.
  • Vera-Olmos J; Medical Image Analysis and Biometry Lab, Universidad Rey Juan Carlos, Madrid, Spain.
  • Izquierdo-Garcia D; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts.
  • Morales MA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts.
  • Torrado-Carvajal A; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts.
  • Ng TSC; Medical Image Analysis and Biometry Lab, Universidad Rey Juan Carlos, Madrid, Spain.
  • Malpica N; Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts.
  • Kamen A; Medical Image Analysis and Biometry Lab, Universidad Rey Juan Carlos, Madrid, Spain.
  • Catana C; Siemens Corporate Research, Princeton, New Jersey.
J Nucl Med; 63(3): 468-475, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34301782
Attenuation correction remains a challenge in pelvic PET/MRI. In addition to the segmentation/model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvic attenuation maps (µ-maps). However, these methods often misclassify air pockets in the digestive tract, potentially introducing bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images.

Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3-dimensional CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semiautomated segmentations. A separate CNN was trained to synthesize pseudo-CT µ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model-, and CT-based µ-maps using data from 30 of the subjects. Finally, the impact of different µ-maps and air pocket segmentation methods on the PET quantification was investigated.

Results: Air pockets segmented using the CNN agreed well with semiautomated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the 2 segmentations was 0.85 ± 0.14. The mean absolute relative changes with respect to the CT-based µ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning-based and model-based µ-maps, respectively. The average relative change between PET images reconstructed with deep learning-based and CT-based µ-maps was 2.6%.

Conclusion: We developed a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images, with accuracy comparable to that of semiautomatic segmentations. The µ-maps synthesized using a deep learning-based method from CAIPIRINHA-accelerated Dixon images were more accurate than those generated with the model-based approach available on integrated PET/MRI scanners.
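As a rough illustration of the evaluation metrics reported in the abstract (Dice similarity coefficient, volumetric similarity, and mean absolute relative change), the sketch below shows how such scores are commonly computed from voxel arrays. This is a minimal sketch assuming NumPy arrays for the segmentation masks and µ-maps; the function and variable names are illustrative and are not taken from the published pipeline.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

def volumetric_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Volumetric similarity: 1 - |V_a - V_b| / (V_a + V_b), ignoring spatial overlap."""
    va, vb = mask_a.astype(bool).sum(), mask_b.astype(bool).sum()
    return 1.0 - abs(int(va) - int(vb)) / (va + vb) if (va + vb) > 0 else 1.0

def mean_absolute_relative_change(test: np.ndarray, reference: np.ndarray,
                                  roi_mask: np.ndarray) -> float:
    """Mean absolute relative change (%) of a test volume vs. a reference volume
    (e.g., a deep learning-based vs. CT-based mu-map) within a region of interest."""
    ref = reference[roi_mask > 0]
    tst = test[roi_mask > 0]
    valid = ref != 0  # avoid division by zero in background voxels
    return 100.0 * np.mean(np.abs((tst[valid] - ref[valid]) / ref[valid]))
```

For example, dice_coefficient(cnn_air_mask, semiauto_air_mask) would yield the kind of overlap score reported above (0.75), and mean_absolute_relative_change(dl_mumap, ct_mumap, pelvis_mask) the kind of whole-pelvis µ-map error (2.6%); the input array names here are hypothetical.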

Full text: 1 Database: MEDLINE Main subject: Deep Learning Study type: Prognostic_studies Limits: Humans Language: English Year of publication: 2022 Document type: Article