MR-CT image fusion method of intracranial tumors based on Res2Net.
Chen, Wei; Li, Qixuan; Zhang, Heng; Sun, Kangkang; Sun, Wei; Jiao, Zhuqing; Ni, Xinye.
Affiliation
  • Chen W; School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, 213164, China.
  • Li Q; Department of Radiotherapy, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China.
  • Zhang H; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China.
  • Sun K; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China.
  • Sun W; Department of Radiotherapy, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China.
  • Jiao Z; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China.
  • Ni X; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China.
BMC Med Imaging ; 24(1): 169, 2024 Jul 08.
Article in En | MEDLINE | ID: mdl-38977957
ABSTRACT

BACKGROUND:

Fusing MR and CT images achieves information complementarity: the fused images contain abundant soft-tissue and bone information, facilitating accurate auxiliary diagnosis and tumor target delineation.

PURPOSE:

The purpose of this study was to construct high-quality fusion images based on the MR and CT images of intracranial tumors by using the Residual-Residual Network (Res2Net) method.

METHODS:

This paper proposes an MR and CT image fusion method based on Res2Net. The method comprises three components: a feature extractor, a fusion layer, and a reconstructor. The feature extractor uses the Res2Net framework to extract multiscale features from the source images. The fusion layer applies a fusion strategy based on spatial mean attention, adaptively adjusting the fusion weight of the feature maps at each spatial position to preserve fine details from the source images. Finally, the fused features are fed into the reconstructor to reconstruct the fused image.
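As an illustration of this three-component design, the following is a minimal PyTorch sketch, not the authors' implementation: the block structure, channel counts, and class names (Res2NetBlock, SpatialMeanAttentionFusion, FusionNet) are assumptions chosen only to show how a Res2Net-style extractor, a spatial-mean-attention fusion layer, and a convolutional reconstructor fit together.

# Minimal sketch of the three-component fusion pipeline (illustrative, not the
# authors' code): a Res2Net-style multiscale feature extractor, a
# spatial-mean-attention fusion layer, and a convolutional reconstructor.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Res2NetBlock(nn.Module):
    """Res2Net-style block: the input channels are split into `scales` groups
    processed by 3x3 convolutions in a hierarchical, residual-like cascade,
    giving multiscale receptive fields within a single block."""

    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per split except the first, which is passed through.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1)
            for _ in range(scales - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        outputs = [splits[0]]
        prev = None
        for i, conv in enumerate(self.convs):
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = F.relu(conv(inp))
            outputs.append(prev)
        return F.relu(torch.cat(outputs, dim=1) + x)  # residual connection


class SpatialMeanAttentionFusion(nn.Module):
    """Fuse two feature maps with weights derived from their channel-wise
    spatial means: at each pixel, the source with the more active features
    receives the larger (softmax-normalised) fusion weight."""

    def forward(self, feat_mr: torch.Tensor, feat_ct: torch.Tensor) -> torch.Tensor:
        act_mr = feat_mr.mean(dim=1, keepdim=True)  # B x 1 x H x W activity map
        act_ct = feat_ct.mean(dim=1, keepdim=True)
        weights = torch.softmax(torch.cat([act_mr, act_ct], dim=1), dim=1)
        return weights[:, 0:1] * feat_mr + weights[:, 1:2] * feat_ct


class FusionNet(nn.Module):
    def __init__(self, features: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, features, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            Res2NetBlock(features), Res2NetBlock(features),
        )
        self.fusion = SpatialMeanAttentionFusion()
        self.decoder = nn.Sequential(
            nn.Conv2d(features, features // 2, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features // 2, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, mr: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(self.encoder(mr), self.encoder(ct))
        return self.decoder(fused)


if __name__ == "__main__":
    net = FusionNet()
    mr, ct = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
    print(net(mr, ct).shape)  # torch.Size([1, 1, 256, 256])

Sharing one encoder for both modalities and fusing at the feature level, as sketched here, keeps the fusion weights spatially adaptive while the reconstructor maps the fused features back to the image domain.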

RESULTS:

Qualitative results indicate that the proposed fusion method produces clear boundary contours and accurate localization of tumor regions. Quantitative results show that the method achieves an average gradient of 4.6771, a spatial frequency of 13.2055, an entropy of 1.8663, and a visual information fidelity for fusion of 0.5176. Comprehensive experiments demonstrate that, compared with advanced fusion algorithms, the proposed method preserves more texture detail and structural information in the fused images, reduces spectral artifacts and information loss, and performs better in terms of both visual quality and objective metrics.
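For reference, the sketch below computes three of the reported metrics using their commonly used definitions in the image-fusion literature (the paper's exact formulations may differ slightly): average gradient from local intensity differences, spatial frequency from row/column first differences, and entropy of the gray-level histogram. Visual information fidelity for fusion is omitted because it relies on a multiscale, reference-based visual-information model.

# Common definitions of average gradient (AG), spatial frequency (SF), and
# entropy (EN) for a grayscale fused image given as a float array in [0, 255].
# Illustrative only; not the authors' evaluation code.
import numpy as np


def average_gradient(img: np.ndarray) -> float:
    # Mean of sqrt((dx^2 + dy^2) / 2) over pixels where both the horizontal
    # and vertical forward differences are defined.
    dx = img[:-1, 1:] - img[:-1, :-1]
    dy = img[1:, :-1] - img[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))


def spatial_frequency(img: np.ndarray) -> float:
    # SF = sqrt(RF^2 + CF^2), with row/column frequencies from first differences.
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))


def entropy(img: np.ndarray, bins: int = 256) -> float:
    # Shannon entropy (bits) of the gray-level histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))


if __name__ == "__main__":
    fused = np.random.rand(256, 256) * 255.0  # stand-in for a fused MR-CT slice
    print(average_gradient(fused), spatial_frequency(fused), entropy(fused))

Higher AG and SF indicate sharper edges and richer texture, and higher EN indicates more information content, which is why these metrics are used to compare fusion methods.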

CONCLUSION:

The proposed method effectively combines MR and CT image information, enabling precise localization of tumor region boundaries and assisting clinicians in clinical diagnosis.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Brain Neoplasms / Magnetic Resonance Imaging / Tomography, X-Ray Computed Limits: Humans Language: En Journal: BMC Med Imaging Journal subject: DIAGNOSTICO POR IMAGEM Year: 2024 Type: Article Affiliation country: China
