DCFNet: Infrared and Visible Image Fusion Network Based on Discrete Wavelet Transform and Convolutional Neural Network.
Wu, Dan; Wang, Yanzhi; Wang, Haoran; Wang, Fei; Gao, Guowang.
Affiliation
  • Wu D; School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710312, China.
  • Wang Y; School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710312, China.
  • Wang H; School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710312, China.
  • Wang F; School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710312, China.
  • Gao G; School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710312, China.
Sensors (Basel); 24(13); 2024 Jun 22.
Article en En | MEDLINE | ID: mdl-39000844
ABSTRACT
Aiming to address missing detail information, blurred salient targets, and poor visual quality in current image fusion algorithms, this paper proposes an infrared and visible-light image fusion algorithm based on the discrete wavelet transform (DWT) and convolutional neural networks. The backbone network is an autoencoder. A DWT layer is embedded in the encoder to optimize frequency-domain feature extraction and prevent information loss, and a bottleneck residual block and a coordinate attention mechanism are introduced to strengthen the capture and characterization of low- and high-frequency features. An IDWT layer is embedded in the decoder to reconstruct features from the fused frequencies. The fusion strategy adopts the l1-norm to integrate the frequency mapping features output by the encoder, and a weighted loss combining pixel loss, gradient loss, and structural loss is constructed to optimize network training. The DWT decomposes the image into sub-bands at different scales: low-frequency sub-bands, which contain the structural information of the image corresponding to important targets, and high-frequency sub-bands, which contain detail information such as edges and texture. Through the IDWT, the low-frequency sub-bands carrying important target information are synthesized with the high-frequency sub-bands that enhance the details, ensuring that both salient targets and texture details are clearly visible in the reconstructed image. The whole process reconstructs the information of the different frequency sub-bands back into the image losslessly, so that the fused image appears natural and visually harmonious.
Experimental results on public datasets show that the fusion algorithm performs well according to both subjective and objective evaluation criteria and that the fused image is clearer and contains more scene information, which verifies the effectiveness of the algorithm, and the results of the generalization experiments also show that our network has good generalization ability.
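As a rough illustration of the pipeline the abstract describes, the sketch below implements a single-level 2D Haar DWT, an l1-norm activity-weighted fusion of corresponding sub-bands, and the inverse transform that losslessly reconstructs the image. The Haar basis, the per-pixel weighting scheme, and all function names are assumptions for illustration only; in the paper the DWT/IDWT layers and the l1-norm strategy operate on learned CNN feature maps inside the autoencoder, not directly on raw pixels.

```python
import numpy as np

def dwt2_haar(img):
    """Single-level 2D Haar DWT: split an image (even H, W) into four sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # low-frequency: structure / salient targets
    lh = (a + b - c - d) / 2.0  # vertical detail
    hl = (a - b + c - d) / 2.0  # horizontal detail
    hh = (a - b - c + d) / 2.0  # diagonal detail: edges, texture
    return ll, lh, hl, hh

def idwt2_haar(ll, lh, hl, hh):
    """Inverse 2D Haar DWT: lossless reconstruction from the four sub-bands."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def fuse_l1(band_ir, band_vis, eps=1e-8):
    """l1-norm activity weighting: the source with larger |activity| dominates."""
    w_ir, w_vis = np.abs(band_ir), np.abs(band_vis)
    s = w_ir + w_vis + eps
    return (w_ir / s) * band_ir + (w_vis / s) * band_vis

def fuse_images(ir, vis):
    """DWT -> per-sub-band l1-weighted fusion -> IDWT reconstruction."""
    bands_ir = dwt2_haar(ir.astype(np.float64))
    bands_vis = dwt2_haar(vis.astype(np.float64))
    fused = [fuse_l1(bi, bv) for bi, bv in zip(bands_ir, bands_vis)]
    return idwt2_haar(*fused)
```

Because the Haar transform is orthonormal, running `idwt2_haar` on unmodified sub-bands returns the input exactly, which mirrors the abstract's claim that the decomposition/reconstruction step itself is non-destructive; any information change comes only from the fusion rule applied between the two transforms.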
Full text: 1 Database: MEDLINE Language: En Journal: Sensors (Basel) Year: 2024 Document type: Article Country of affiliation: China