DesTrans: A medical image fusion method based on Transformer and improved DenseNet.
Comput Biol Med; 174: 108463, 2024 May.
Article in English | MEDLINE | ID: mdl-38640634
ABSTRACT
Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely applied to medical image fusion. Traditional fusion methods operate directly on pixels, for example by superposition, and the introduction of deep learning has improved fusion quality; however, existing methods still suffer from edge blurring and information redundancy. In this paper, we propose a deep learning network that integrates a Transformer with an improved DenseNet module to address these problems in medical images; the method also transfers to natural images. The Transformer and dense concatenation enhance the feature extraction capability of the method by limiting feature loss, which reduces the risk of edge blurring. We compared this method with several representative traditional methods and more advanced deep learning methods. The experimental results show that the Transformer and the improved DenseNet module have a strong feature extraction capability, and the method yields good results in both visual quality and objective image evaluation metrics.
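The "dense concatenation" the abstract credits with limiting feature loss is the DenseNet connectivity pattern: every layer in a block receives the channel-wise concatenation of all earlier feature maps, so no intermediate features are discarded. A minimal NumPy sketch of that pattern follows; the random linear maps stand in for the paper's (unspecified) convolution layers, and `growth_rate` and the layer count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dense_block(x, num_layers=3, growth_rate=4, rng=None):
    # Hypothetical sketch of DenseNet-style dense concatenation.
    # Each "layer" (a random channel-mixing map plus ReLU, standing in
    # for conv + activation) receives the concatenation of the input
    # and all previous layers' outputs, so earlier features are never lost.
    rng = rng or np.random.default_rng(0)
    features = [x]                                 # channels-first: (C, H, W)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)     # all features so far
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.maximum(np.tensordot(w, inp, axes=1), 0.0)  # ReLU
        features.append(out)
    return np.concatenate(features, axis=0)        # final dense concatenation

x = np.ones((8, 16, 16))   # 8-channel input feature map
y = dense_block(x)
print(y.shape)             # channels grow to 8 + 3 * 4 = 20 -> (20, 16, 16)
```

The output channel count grows linearly (input channels plus `num_layers * growth_rate`), which is why DenseNet variants typically follow a block with a transition layer to compress channels.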
Keywords
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Deep learning
Limits: Humans
Language: English
Journal: Comput Biol Med / Comput. biol. med / Computers in biology and medicine
Year: 2024
Document type: Article
Affiliation country: China
Country of publication: United States of America