DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation.
Zhou, Xinzhi; He, Min; Zhou, Dongming; Xu, Feifei; Jeon, Seunggil.
Affiliation
  • Zhou X; School of Information, Yunnan University, Kunming 650504, China.
  • He M; School of Information, Yunnan University, Kunming 650504, China.
  • Zhou D; School of Information, Yunnan University, Kunming 650504, China.
  • Xu F; School of Information, Yunnan University, Kunming 650504, China.
  • Jeon S; Samsung Electronics Co., Ltd., 129 Samseong-ro, Yeongtong-gu, Suwon-si 16677, Republic of Korea.
Sensors (Basel); 24(1); 2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38203065
ABSTRACT
Infrared and visible image fusion aims to produce an informative fused image of a scene by integrating the complementary information from the two source images. Most deep-learning-based fusion networks use small-kernel convolutions, which extract features only from a local receptive field, or rely on non-learnable fusion strategies, both of which limit the network's feature representation capability and fusion performance. To address these problems, a novel end-to-end infrared and visible image fusion framework called DTFusion is proposed. A residual PConv-ConvNeXt module (RPCM) and dense connections are introduced into the encoder network to efficiently extract features with larger receptive fields. In addition, a texture-contrast compensation module (TCCM) with gradient residuals and an attention mechanism is designed to compensate for the texture details and contrast of the features. The fused features are reconstructed through four convolutional layers to generate a fused image with rich scene information. Experiments on public datasets show that DTFusion outperforms other state-of-the-art fusion methods in both subjective visual quality and objective metrics.
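The partial convolution (PConv) underlying the RPCM applies a regular convolution to only a subset of the input channels and passes the remaining channels through unchanged, trading redundancy in the feature maps for fewer FLOPs. A minimal NumPy sketch of this idea is shown below; the function name, the channel split `n_conv`, and the fixed 3x3 kernel are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pconv(x, weight, n_conv):
    """Sketch of a partial convolution (illustrative, not the paper's code).

    A 3x3 convolution with same-size output is applied only to the first
    n_conv channels of x; the remaining channels are passed through
    untouched, which reduces computation versus convolving all channels.

    x:      (C, H, W) feature map
    weight: (n_conv, n_conv, 3, 3) kernel acting on the convolved subset
    """
    C, H, W = x.shape
    out = x.copy()  # untouched channels pass through as-is
    # zero-pad spatially so the convolved output keeps the same H x W
    xp = np.pad(x[:n_conv], ((0, 0), (1, 1), (1, 1)))
    conv = np.zeros((n_conv, H, W))
    for o in range(n_conv):          # output channel
        for i in range(n_conv):      # input channel of the convolved subset
            for dh in range(3):
                for dw in range(3):
                    conv[o] += weight[o, i, dh, dw] * xp[i, dh:dh + H, dw:dw + W]
    out[:n_conv] = conv
    return out
```

In a residual block like the RPCM, the PConv output would then be mixed by pointwise convolutions and added back to the input, so the passthrough channels still participate in later layers.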
Full text: 1 | Database: MEDLINE | Language: English | Year of publication: 2023 | Document type: Article