Results 1 - 5 of 5
1.
Entropy (Basel) ; 25(6)2023 Jun 09.
Article in English | MEDLINE | ID: mdl-37372258

ABSTRACT

When traditional super-resolution reconstruction methods are applied to infrared thermal images, they often ignore the poor image quality caused by the imaging mechanism, which makes it difficult to obtain high-quality results even when training on simulated inverse degradation processes. To address these issues, we proposed a thermal infrared image super-resolution reconstruction method based on multimodal sensor fusion, which enhances the resolution of thermal infrared images and relies on multimodal sensor information to reconstruct high-frequency details, thereby overcoming the limitations of the imaging mechanism. First, we designed a novel super-resolution reconstruction network consisting of primary feature encoding, super-resolution reconstruction, and high-frequency detail fusion subnetworks. We designed hierarchical dilated distillation modules and a cross-attention transformation module to extract and transmit image features, strengthening the network's ability to express complex patterns. Then, we proposed a hybrid loss function that guides the network to extract salient features from thermal infrared images and reference images while maintaining accurate thermal information. Finally, we proposed a learning strategy that ensures high-quality super-resolution reconstruction even in the absence of reference images. Extensive experimental results show that the proposed method produces higher-quality reconstructions than competing methods, demonstrating its effectiveness.
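The hybrid loss described above combines pixel fidelity with a high-frequency detail term supervised by a reference image when one is available. A minimal numpy sketch of that idea follows; the function names, gradient approximation, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_energy(img):
    # First differences along rows and columns approximate the
    # high-frequency (edge/detail) content of the image.
    gx = np.abs(np.diff(img, axis=1))
    gy = np.abs(np.diff(img, axis=0))
    return gx.mean() + gy.mean()

def hybrid_loss(sr, hr, ref=None, w_pixel=1.0, w_detail=0.5):
    """Pixel fidelity plus a detail term. When a reference image from
    another modality is available, its gradients supervise the details;
    otherwise the high-resolution target itself is used."""
    pixel = np.abs(sr - hr).mean()
    target = ref if ref is not None else hr
    detail = abs(gradient_energy(sr) - gradient_energy(target))
    return w_pixel * pixel + w_detail * detail
```

A perfect reconstruction drives both terms to zero, while blurry outputs are penalized by the detail term even when their pixel error is small.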

2.
Entropy (Basel) ; 25(8)2023 Aug 12.
Article in English | MEDLINE | ID: mdl-37628231

ABSTRACT

Low-illumination image enhancement is a topic of interest in the field of image processing. However, while improving image brightness, it is difficult to preserve the texture and details of the image, so image quality cannot be guaranteed. To solve this problem, this paper proposed a low-illumination enhancement method based on structure and detail layers. First, we designed an SRetinex-Net model, which is divided into two parts: a decomposition module and an enhancement module. Second, the decomposition module adopts the SU-Net structure, an unsupervised network that decomposes the input image into a structure-layer image and a detail-layer image. The enhancement module then adopts the SDE-Net structure, which has two branches: SDE-S and SDE-D. The SDE-S branch enhances and adjusts the brightness of the structure-layer image through Ehnet and Adnet to prevent under- or overexposed brightness enhancement. The SDE-D branch denoises the detail-layer image and enhances its textural details through a denoising module. This network structure greatly reduces computational costs. Moreover, we improved the total variation optimization model into a mixed loss function, adding structural and textural metrics as variables to the original loss, which cleanly separates structure edges from texture edges. Numerous experiments show that our method preserves brightness and detail in image restoration markedly better.
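The structure/detail decomposition above is learned by SU-Net in the paper; as a rough intuition, a smoothing filter can stand in for the structure layer, with the residual as the detail layer. The sketch below uses a mean filter purely for illustration; it is not the authors' network.

```python
import numpy as np

def decompose(img, k=5):
    """Toy structure/detail split: a k x k mean filter yields the
    structure layer; the residual is the detail layer. The two layers
    sum back to the input by construction."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')  # edge padding avoids border darkening
    h, w = img.shape
    structure = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            structure[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - structure
    return structure, detail
```

In the full method, brightness adjustment is applied only to the structure layer and denoising only to the detail layer, then the two are recombined.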

3.
Entropy (Basel) ; 25(7)2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37509969

ABSTRACT

Infrared pedestrian target detection is hampered by the low resolution and contrast of infrared pedestrian images, as well as by complex backgrounds and mutual occlusion among multiple targets, which leave target features indistinct. To address these issues, this paper proposes a method that improves pedestrian detection accuracy by using contour information to guide multi-scale feature detection, analyzing the shapes and edges of targets in infrared images at different scales to more accurately identify them and distinguish them from the background and from one another. First, we propose a preprocessing method to suppress background interference and extract color information from visible images. Second, we propose an information fusion residual block combining a U-shaped structure with residual connections to form the feature extraction network. Then, we propose a contour information-guided attention mechanism that steers the network toward the deep features of pedestrian targets. Finally, we use mIoU-based clustering to generate anchor box sizes suited to the KAIST pedestrian dataset and propose a hybrid loss function to improve the network's adaptability to pedestrian targets. Extensive experimental results show that the proposed method outperforms the comparison algorithms in pedestrian detection, demonstrating its superiority.
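The mIoU-based anchor clustering mentioned above is commonly implemented as k-means over box width/height pairs with 1 - IoU as the distance. The sketch below assumes corner-aligned boxes and numpy; the function names and iteration counts are illustrative, not the paper's exact procedure.

```python
import numpy as np

def iou_wh(box, anchors):
    # IoU between one (w, h) box and each (w, h) anchor,
    # with both boxes anchored at the same corner.
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=3, iters=50, seed=0):
    """Cluster ground-truth (w, h) pairs into k anchors, assigning each
    box to the anchor it overlaps most (highest IoU)."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for c in range(k):
            if np.any(assign == c):
                anchors[c] = boxes[assign == c].mean(axis=0)
    return anchors
```

Maximizing mean IoU rather than minimizing Euclidean distance keeps large and small boxes from being forced into the same cluster.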

4.
Entropy (Basel) ; 24(12)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36554164

ABSTRACT

The goal of infrared and visible image fusion in night scenes is to generate a fused image containing salient targets and rich textural details. However, existing image fusion methods fail to account for the unevenness of nighttime luminance. To address this issue, an infrared and visible image fusion method for highlighting salient targets in night scenes is proposed. First, a global attention module is designed that rescales the weights of different channels after capturing global contextual information. Second, the loss function is divided into a foreground loss and a background loss, forcing the fused image to retain rich texture details while highlighting salient targets. Finally, a luminance estimation function is introduced to obtain the trade-off parameters of the foreground loss from the nighttime luminance, effectively highlighting salient targets by retaining foreground information from the source images. Experimental results demonstrate that the proposed method achieves excellent fusion performance and generalization compared with other advanced methods.
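The foreground/background split and luminance-driven trade-off can be summarized in a few lines of numpy. This is a simplified sketch under assumed conventions (a binary saliency mask, a linear luminance-to-weight mapping); the names and thresholds are not from the paper.

```python
import numpy as np

def luminance_weight(visible, lo=0.2, hi=0.8):
    """Map mean scene luminance to a foreground-loss weight: darker
    scenes lean more heavily on the infrared foreground."""
    y = visible.mean()
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

def fusion_loss(fused, ir, vis, mask):
    # mask is 1 on salient (foreground) pixels, 0 elsewhere.
    w = luminance_weight(vis)
    fg = np.abs((fused - ir) * mask).mean()        # match IR on targets
    bg = np.abs((fused - vis) * (1 - mask)).mean()  # match visible elsewhere
    return w * fg + bg
```

A fused image that copies the infrared foreground and the visible background drives this loss to zero, which is the behavior the split is designed to encourage.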

5.
Neural Netw ; 173: 106184, 2024 May.
Article in English | MEDLINE | ID: mdl-38387204

ABSTRACT

Colorizing thermal infrared images poses a significant challenge, as current methods struggle with issues such as unrealistic color saturation and limited texture. To address these challenges, we propose the Feature Refinement and Adaptive Generative Adversarial Network (FRAGAN). Our approach strengthens the detailed, semantic, and contextual capabilities of image colorization by combining multi-level interactions that integrate the detailed information lost during encoding with the semantic information from the decoding stage. Additionally, we introduce the Residual Feature Refinement Module (RFRM) to improve both the accuracy and generalization ability of the model, thereby elevating the quality of colorization results. The Feature Adaptation Module (FAM) is employed to mitigate sub-region information loss during downsampling. Furthermore, we introduce the Trinity Attention Module (TAM) to accurately capture the spatial and channel-wise interaction features of local semantic information. Extensive experiments on the KAIST and FLIR datasets demonstrate the superiority of the proposed FRAGAN, which surpasses current state-of-the-art methods in both performance metrics and visual quality. The colorized images generated by FRAGAN exhibit enhanced clarity and realism. Our code and models are available at GitHub.
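The "Trinity" attention idea of jointly gating spatial and channel interactions can be illustrated with three pooled branches over a (C, H, W) feature map. This numpy sketch is a loose interpretation for intuition only; the real TAM is a learned module and its exact form is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def trinity_attention(feat):
    """Three-branch gating on a (C, H, W) feature map: pooled statistics
    along each axis produce a sigmoid gate per channel, per row, and per
    column; the three gates are averaged and applied multiplicatively."""
    c = sigmoid(feat.mean(axis=(1, 2)))[:, None, None]   # channel branch
    h = sigmoid(feat.mean(axis=(0, 2)))[None, :, None]   # height branch
    w = sigmoid(feat.mean(axis=(0, 1)))[None, None, :]   # width branch
    return feat * (c + h + w) / 3.0
```

Because each gate lies in (0, 1), the module rescales rather than amplifies features, letting informative channels and spatial positions dominate.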


Subject(s)
Benchmarking , Generalization, Psychological , Empirical Research , Semantics , Image Processing, Computer-Assisted