Results 1 - 3 of 3
1.
IEEE Trans Cybern; PP, 2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37676810

ABSTRACT

Object detection techniques have been widely studied, utilized in various works, and have exhibited robust performance on images with sufficient luminance. However, these approaches typically struggle to extract valuable features from low-luminance images, which often exhibit blurriness and a dim appearance, leading to detection failures. To overcome this issue, we introduce an innovative unsupervised feature-domain knowledge distillation (KD) framework. The proposed framework enhances the generalization capability of neural networks across both low- and high-luminance domains without incurring additional computational cost during testing. This improvement is made possible through the integration of generative adversarial networks and our proposed unsupervised KD process. Furthermore, we introduce a region-based multiscale discriminator designed to discern feature-domain discrepancies at the object level rather than from the global context, which bolsters the joint learning of the object detection and feature-domain distillation tasks. Both qualitative and quantitative assessments show that the proposed method, empowered by the region-based multiscale discriminator and the unsupervised feature-domain distillation process, can effectively extract beneficial features from low-luminance images, outperforming other state-of-the-art approaches in both the low- and sufficient-luminance domains.
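The abstract gives no implementation details, but the core idea can be illustrated with a minimal, hypothetical PyTorch sketch: a region-level domain discriminator is applied to ROI-pooled detector features, and an adversarial objective pushes low-luminance features toward the high-luminance feature domain. All module names, shapes, the single-scale simplification, and the loss formulation below are assumptions for illustration, not the authors' code.

# Hypothetical sketch of region-level feature-domain distillation (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_align

class RegionDiscriminator(nn.Module):
    """Predicts whether ROI-pooled features come from the high- or low-luminance domain."""
    def __init__(self, channels, roi_size=7):
        super().__init__()
        self.roi_size = roi_size
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, feats, boxes):
        # boxes: list of (N_i, 4) tensors in image coordinates, one per image.
        # A multiscale variant would repeat this over several feature levels.
        rois = roi_align(feats, boxes, output_size=self.roi_size, spatial_scale=1 / 8)
        return self.net(rois)  # one domain logit per region

def distillation_step(disc, feats_low, feats_high, boxes):
    """Adversarial alignment losses for one batch of region proposals."""
    # Discriminator update: high-luminance regions labelled 1, low-luminance 0.
    logits_high = disc(feats_high.detach(), boxes)
    logits_low_d = disc(feats_low.detach(), boxes)
    d_loss = F.binary_cross_entropy_with_logits(logits_high, torch.ones_like(logits_high)) \
           + F.binary_cross_entropy_with_logits(logits_low_d, torch.zeros_like(logits_low_d))
    # Detector update: its low-luminance features should be mistaken for high-luminance ones.
    logits_low_g = disc(feats_low, boxes)
    g_loss = F.binary_cross_entropy_with_logits(logits_low_g, torch.ones_like(logits_low_g))
    return d_loss, g_loss

In this reading, g_loss is added to the ordinary detection loss during training, while the discriminator is discarded at test time, which matches the claim that no extra computation is incurred during testing.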

2.
IEEE Trans Pattern Anal Mach Intell; 43(8): 2623-2633, 2021 Aug.
Article in English | MEDLINE | ID: mdl-32149681

ABSTRACT

Over the past half decade, object detection approaches based on convolutional neural networks have been widely studied and successfully applied in many computer vision applications. However, detecting objects in inclement weather remains a major challenge because of poor visibility. In this article, we address object detection in the presence of fog by introducing a novel dual-subnet network (DSNet) that can be trained end-to-end and jointly learns three tasks: visibility enhancement, object classification, and object localization. DSNet attains complete performance improvement by including two subnetworks: a detection subnet and a restoration subnet. We employ RetinaNet as the backbone network (also called the detection subnet), which is responsible for learning to classify and locate objects. The restoration subnet is designed by sharing feature extraction layers with the detection subnet and adopting a feature recovery (FR) module for visibility enhancement. Experimental results show that our DSNet achieved 50.84 percent mean average precision (mAP) on a synthetic foggy dataset that we composed and 41.91 percent mAP on a public natural foggy dataset (the Foggy Driving dataset), outperforming many state-of-the-art object detectors as well as models that combine dehazing and detection methods, while maintaining high speed.
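A rough sketch of the joint-training idea described above, under heavy assumptions: a shared backbone feeds both a detection head and a restoration decoder, and the two losses are summed. The tiny stand-in layers below replace RetinaNet and the FR module, and det_loss_fn is a placeholder for the usual focal/regression detection loss; none of this reflects the authors' actual architecture.

# Hypothetical sketch of a detection subnet sharing features with a restoration subnet.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackbone(nn.Module):
    """Stand-in for the shared feature extraction layers (input H and W assumed divisible by 4)."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.stem(x)  # features consumed by both subnets

class RestorationSubnet(nn.Module):
    """Stand-in for the feature recovery (FR) branch: decodes shared features to a defogged image."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, feats):
        return self.decoder(feats)

class DetectionHead(nn.Module):
    """Stand-in for the RetinaNet classification/regression heads."""
    def __init__(self, num_classes=80, num_anchors=9):
        super().__init__()
        self.cls = nn.Conv2d(64, num_anchors * num_classes, 3, padding=1)
        self.reg = nn.Conv2d(64, num_anchors * 4, 3, padding=1)
    def forward(self, feats):
        return self.cls(feats), self.reg(feats)

def joint_loss(foggy, clear, backbone, det_head, rest_head, det_loss_fn, targets):
    feats = backbone(foggy)
    cls_out, reg_out = det_head(feats)
    restored = rest_head(feats)
    # Joint objective: detection loss plus an L1 restoration loss against the clear image.
    return det_loss_fn(cls_out, reg_out, targets) + F.l1_loss(restored, clear)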

3.
Article in English | MEDLINE | ID: mdl-29994633

ABSTRACT

Existing learning-based atmospheric particle-removal approaches, such as those used for rainy and hazy images, are designed with strong assumptions regarding spatial frequency, trajectory, and translucency. However, the removal of snow particles is more complicated because they possess the additional attributes of particle size and shape, and these attributes may vary within a single image. Currently, hand-crafted features are still the mainstream for snow removal, making significant generalization difficult to achieve. In response, we have designed a multistage network named DesnowNet that deals in turn with the removal of translucent and opaque snow particles. We also differentiate the snow attributes of translucency and chromatic aberration for accurate estimation. Moreover, our approach individually estimates residual complements of the snow-free images to recover details obscured by opaque snow. Additionally, a multi-scale design is utilized throughout the entire network to model the diversity of snow. As demonstrated in qualitative and quantitative experiments, our approach outperforms state-of-the-art learning-based atmospheric phenomena removal methods and a semantic segmentation baseline on the proposed Snow100K dataset. The results indicate that our network would benefit applications involving computer vision and graphics.
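The staged removal of translucent and then opaque snow can be pictured with the following hypothetical two-stage sketch: the first stage estimates a per-pixel snow mask and an initial snow-free image, and the second predicts a residual complement that restores details hidden behind opaque snow. The module names, channel sizes, and single-scale design are illustrative assumptions, not DesnowNet itself.

# Hypothetical two-stage desnowing sketch (mask estimation followed by residual recovery).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class TranslucencyStage(nn.Module):
    """Estimates a snow mask z in [0, 1] and a first snow-free estimate y1."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32))
        self.mask_head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
        self.img_head = nn.Conv2d(32, 3, 3, padding=1)
    def forward(self, x):
        f = self.body(x)
        return self.img_head(f), self.mask_head(f)

class ResidualStage(nn.Module):
    """Predicts a residual r so that y2 = y1 + r recovers detail occluded by opaque snow."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(7, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x, y1, z):
        return y1 + self.body(torch.cat([x, y1, z], dim=1))

class TwoStageDesnow(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = TranslucencyStage()
        self.stage2 = ResidualStage()
    def forward(self, x):
        y1, z = self.stage1(x)   # handle translucent snow first
        return self.stage2(x, y1, z)  # then complement opaque regions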
