Priors-assisted dehazing network with attention supervision and detail preservation.
Yi, Weichao; Dong, Liquan; Liu, Ming; Hui, Mei; Kong, Lingqin; Zhao, Yuejin.
Affiliation
  • Yi W; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China. Electronic address: 3120215346@bit.edu.cn.
  • Dong L; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, 314019, China. Electronic address: kylin
  • Liu M; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, 314019, China. Electronic address: bit41
  • Hui M; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China. Electronic address: huim@bit.edu.cn.
  • Kong L; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, 314019, China. Electronic address: kongl
  • Zhao Y; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, 314019, China. Electronic address: yjzha
Neural Netw; 173: 106165, 2024 May.
Article in En | MEDLINE | ID: mdl-38340469
ABSTRACT
Single image dehazing is a challenging computer vision task that underpins other high-level applications, e.g., object detection, navigation, and positioning systems. Recently, most existing dehazing methods have followed a "black box" recovery paradigm that obtains the haze-free image from its corresponding hazy input through network learning. Unfortunately, these algorithms ignore the effective utilization of relevant image priors and the non-uniform distribution of haze, causing insufficient or excessive dehazing. In addition, they pay little attention to preserving image detail during the dehazing process, and thus inevitably produce blurry results. To address these problems, we propose a novel priors-assisted dehazing network (PADNet), which fully explores relevant image priors from two new perspectives: attention supervision and detail preservation. On the one hand, we leverage the dark channel prior to constrain the generation of the attention map, which encodes the positions of haze pixels, thereby better extracting non-uniform feature distributions from hazy images. On the other hand, we find that the residual channel prior of hazy images contains rich structural information, so it is natural to incorporate it into our dehazing architecture to preserve more structural detail. Furthermore, since the attention map and the dehazed image are predicted simultaneously as the model converges, a self-paced semi-curriculum learning strategy is utilized to alleviate the learning ambiguity. Extensive quantitative and qualitative experiments on several benchmark datasets demonstrate that PADNet performs favorably against existing state-of-the-art methods. The code will be available at https://github.com/leandepk/PADNet.
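
The dark channel prior referenced in the abstract is the classic statistic of He et al. (2011): for a haze-free outdoor image, the per-pixel minimum over color channels and a local window tends toward zero, so large dark-channel values indicate haze. Below is a minimal sketch of that statistic, assuming an RGB image normalized to [0, 1]; the 15-pixel window and the use of grayscale erosion are illustrative choices, not details taken from PADNet.

    # Dark channel prior sketch: J_dark(x) = min over channels c of min over window Omega(x) of J_c(y).
    # Hedged example; the patch size is an assumption, not a PADNet hyperparameter.
    import numpy as np
    from scipy.ndimage import grey_erosion

    def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
        """Dark channel of an (H, W, 3) RGB image with values in [0, 1]."""
        per_pixel_min = image.min(axis=2)                      # minimum over the color channels
        return grey_erosion(per_pixel_min, size=(patch, patch))  # minimum over a patch x patch window

The abstract also mentions a self-paced semi-curriculum strategy for jointly learning the attention map and the dehazed image. The paper's exact schedule is not given here, so the following is only a generic hard self-paced weighting in the spirit of Kumar et al. (2010): samples with low loss ("easy" ones) are kept, and raising the threshold over training admits progressively harder samples.

    # Generic self-paced weighting sketch; the threshold schedule is hypothetical.
    import torch

    def self_paced_weights(losses: torch.Tensor, threshold: float) -> torch.Tensor:
        # Binary weights: keep samples whose current loss is below the threshold;
        # increasing the threshold across epochs gradually includes harder samples.
        return (losses.detach() < threshold).float()
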
Full text: 1 Database: MEDLINE Main subject: Algorithms / Benchmarking Language: En Year of publication: 2024 Document type: Article