DesTrans: A medical image fusion method based on Transformer and improved DenseNet.
Song, Yumeng; Dai, Yin; Liu, Weibin; Liu, Yue; Liu, Xinpeng; Yu, Qiming; Liu, Xinghan; Que, Ningfeng; Li, Mingzhe.
Affiliation
  • Song Y; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Dai Y; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Engineering Center on Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110169, China. Electronic address: daiyin@bmie.neu.edu.cn.
  • Liu W; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Liu Y; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Liu X; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Yu Q; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Liu X; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Que N; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
  • Li M; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
Comput Biol Med ; 174: 108463, 2024 May.
Article in English | MEDLINE | ID: mdl-38640634
ABSTRACT
Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional fusion methods operate directly on pixels, for example by superimposing the source images; the introduction of deep learning has improved fusion quality, but existing methods still suffer from edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module, which can be applied to medical images and addresses the above problems; the method also transfers to natural images. The Transformer and dense concatenation enhance the feature extraction capability of the method by limiting feature loss, which reduces the risk of edge blurring. We compared this method with several representative traditional methods and more advanced deep learning methods. The experimental results show that the Transformer and the improved DenseNet module have a strong feature extraction capability, and the method achieves good results in both visual quality and objective image evaluation metrics.
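The record does not include architecture details. As a rough illustration of the idea described in the abstract (DenseNet-style dense concatenation for feature extraction, followed by Transformer self-attention over tokens from both modalities, then decoding to a single fused image), the following is a minimal PyTorch sketch. The class names, layer sizes, token handling, and fusion rule are assumptions made for illustration; this is not the authors' DesTrans implementation.

# Minimal sketch of a dense-block + Transformer fusion network (assumed layout,
# not the published DesTrans architecture).
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """DenseNet-style block: each layer sees the concatenation of all earlier outputs."""
    def __init__(self, in_channels: int, growth: int = 16, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # dense concatenation grows the channel count
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class FusionSketch(nn.Module):
    """Encode each modality with a dense block, mix tokens with a Transformer, decode to one image."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.encoder = DenseBlock(in_channels=1)
        self.proj = nn.Conv2d(self.encoder.out_channels, embed_dim, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Conv2d(embed_dim, 1, kernel_size=3, padding=1)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # Shared dense-block encoder for both modalities (e.g., CT and MRI slices).
        feat_a = self.proj(self.encoder(img_a))            # (B, D, H, W)
        feat_b = self.proj(self.encoder(img_b))
        b, d, h, w = feat_a.shape
        # Flatten spatial positions into tokens; self-attention mixes the two modalities.
        tokens = torch.cat([
            feat_a.flatten(2).transpose(1, 2),             # (B, H*W, D)
            feat_b.flatten(2).transpose(1, 2),
        ], dim=1)                                          # (B, 2*H*W, D)
        tokens = self.transformer(tokens)
        fused = tokens[:, :h * w] + tokens[:, h * w:]      # merge the two modality streams
        fused = fused.transpose(1, 2).reshape(b, d, h, w)
        return torch.sigmoid(self.decoder(fused))          # fused image in [0, 1]


if __name__ == "__main__":
    ct, mri = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(FusionSketch()(ct, mri).shape)  # torch.Size([1, 1, 64, 64])

In this sketch the fusion rule is a simple addition of the two token streams after attention; the paper's actual fusion strategy may differ.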
Subject(s)
Keywords

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Deep Learning Limits: Humans Language: English Journal: Comput Biol Med / Comput. biol. med / Computers in biology and medicine Year: 2024 Document type: Article Country of affiliation: China Country of publication: United States of America