Results 1 - 8 of 8
1.
Sensors (Basel) ; 22(15)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35957224

ABSTRACT

Pedestrian and vehicle detection plays a key role in the safe driving of autonomous vehicles. Although transformer-based object detection algorithms have made great progress, detection accuracy in rainy scenes remains challenging. Building on the Swin Transformer, this paper proposes an end-to-end pedestrian and vehicle detection algorithm (PVformer) with a deraining module, which improves both image quality and detection accuracy in rainy scenes. A four-branch feature mapping model built on Transformer blocks performs single-image deraining, mitigating the effect of rain-streak occlusion on detector performance. To address the difficulty that pure vision transformers have with small-object detection, we designed a local enhancement perception block combining a CNN and a Transformer. In addition, the deraining and detection modules were combined, and the PVformer model was trained via transfer learning. Experimental results show that the algorithm performs well in rainy conditions and significantly improves the accuracy of pedestrian and vehicle detection.
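As a rough illustration of the kind of CNN-plus-Transformer hybrid the abstract describes, the sketch below sums a global self-attention branch with a depthwise-convolution branch, so local detail useful for small objects survives alongside global context. All module names, dimensions, and the residual fusion rule here are illustrative assumptions, not PVformer's actual design.

```python
import torch
import torch.nn as nn

class LocalEnhancementBlock(nn.Module):
    """Hypothetical local-enhancement block: attention + depthwise conv."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # CNN branch: a depthwise 3x3 conv captures local texture that
        # pure self-attention tends to smooth over
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape                           # x: (B, C, H, W)
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        t = self.norm(tokens)
        attn_out, _ = self.attn(t, t, t)               # global branch
        global_feat = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return x + global_feat + self.local(x)         # residual fusion

x = torch.randn(2, 32, 16, 16)
y = LocalEnhancementBlock(32)(x)                       # shape preserved
```

The shape-preserving residual form lets such a block drop into an existing backbone without changing the surrounding layers.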


Subjects
Automobile Driving , Pedestrians , Algorithms , Data Collection , Humans , Rain
2.
Sensors (Basel) ; 22(18)2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36146173

ABSTRACT

Computer vision technology is increasingly used in areas such as intelligent security and autonomous driving. Users need accurate and reliable visual information, but images captured in severe weather are often disturbed by rain, leaving scenes blurry. Many current single-image deraining algorithms achieve good performance but are limited in how well they retain detailed image information. In this paper, we design a Scale-space Feature Recalibration Network (SFR-Net) for single image deraining. The proposed network improves feature extraction and characterization through a Multi-scale Extraction Recalibration Block (MERB) that uses dilated convolutions with different kernel sizes, yielding rich multi-scale rain-streak features. In addition, we develop a Subspace Coordinated Attention Mechanism (SCAM) and embed it into MERB; it combines coordinated attention recalibration with a subspace attention mechanism to recalibrate the rain-streak features learned during extraction and eliminate redundant feature information, strengthening the transfer of important features. The overall SFR-Net structure uses dense connections and cross-layer feature fusion to reuse feature maps repeatedly, enhancing the network's representational capacity and avoiding vanishing gradients. Extensive experiments on synthetic and real datasets show that the proposed method outperforms recent state-of-the-art deraining algorithms in both rain removal and preservation of image detail.
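The multi-scale idea behind MERB can be sketched with parallel dilated convolutions: 3x3 kernels at dilation 1, 2, and 3 see effective receptive fields of 3, 5, and 7 pixels, so thin and wide rain streaks are captured in one block. The branch count, fusion scheme, and channel sizes below are assumptions for illustration, not SFR-Net's exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Illustrative multi-scale block using dilated 3x3 convolutions."""
    def __init__(self, ch: int):
        super().__init__()
        # same kernel size, growing dilation -> growing receptive field
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 3)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)  # 1x1 conv fuses the scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(multi)           # residual connection

out = MultiScaleDilatedBlock(16)(torch.randn(1, 16, 32, 32))
```

Setting `padding` equal to `dilation` keeps the spatial size fixed, which is what lets the branches be concatenated channel-wise.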


Subjects
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods
3.
Sensors (Basel) ; 22(24)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36559956

ABSTRACT

Images captured in bad weather are not conducive to visual tasks. Rain streaks significantly disrupt the normal operation of imaging equipment; combining multiple neural networks to solve this problem is a growing trend. Careful integration of network structures makes full use of the powerful representation and fitting abilities of deep learning for low-level vision tasks. In this study, we propose a generative adversarial network (GAN) with multiple attention mechanisms for image rain removal. First, to the best of our knowledge, we are the first to use a pretrained vision transformer (ViT) as the discriminator of a GAN for single-image rain removal. Second, we propose a training method that uses a small amount of data while maintaining promising results and reliable visual quality. Extensive experiments confirm the effectiveness of our method, which achieves better results on synthetic and real image datasets than multiple state-of-the-art methods, even when using less training data.
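A ViT-style discriminator can be gestured at with a miniature patchify-encode-score pipeline, as below. This toy model is randomly initialized; the paper's key point is that the discriminator is *pretrained*, which a real implementation would handle by loading published ViT weights. Patch size, depth, and width here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ViTDiscriminator(nn.Module):
    """Tiny transformer discriminator: patchify -> encode -> real/fake logit."""
    def __init__(self, img: int = 64, patch: int = 8, dim: int = 64,
                 depth: int = 2, heads: int = 4):
        super().__init__()
        # non-overlapping patches via strided convolution
        self.patch_embed = nn.Conv2d(3, dim, patch, stride=patch)
        n_patches = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)       # one real/fake logit per image

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        return self.head(self.encoder(t).mean(dim=1))   # (B, 1)

score = ViTDiscriminator()(torch.randn(2, 3, 64, 64))
```

Mean-pooling the patch tokens before the linear head is one simple way to get a per-image score; a class token would work equally well.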


Subjects
Electric Power Supplies , Knowledge , Neural Networks, Computer , Rain , Weather , Image Processing, Computer-Assisted
4.
Sensors (Basel) ; 21(16)2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34450762

ABSTRACT

Recently, deep learning-based image deblurring and deraining have developed rapidly. However, most existing methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which imposes a heavy computational burden. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve these problems. The proposed LFDN is an encoder-decoder architecture. In the encoding stage, image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block at the beginning of the decoding stage enables the network to continuously distill and screen valuable channel information from feature maps. In addition, an attention mechanism fuses information between distillation modules and feature channels. By fusing these different sources of information, our network achieves state-of-the-art deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
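"Distilling and screening valuable channel information" is often realized with squeeze-and-excitation-style channel gating: global pooling scores each channel, and low-scoring (redundant) channels are suppressed. The sketch below shows that generic mechanism, assumed here as an analogue of LFDN's distillation block rather than its actual design.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Squeeze-and-excitation style channel screening (illustrative)."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                   # squeeze: (B,C,1,1)
            nn.Conv2d(ch, ch // reduction, 1),         # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),                              # per-channel weight in (0,1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)   # reweight channels, keep spatial layout

y = ChannelGate(16)(torch.randn(2, 16, 8, 8))
```

The bottleneck (`reduction=4`) keeps the gate cheap, which matches the lightweight goal stated in the abstract.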


Subjects
Image Processing, Computer-Assisted , Information Storage and Retrieval
5.
Sensors (Basel) ; 20(6)2020 Mar 12.
Article in English | MEDLINE | ID: mdl-32178420

ABSTRACT

Capturing images on rainy days degrades visual quality and hampers analysis tasks such as object detection and classification, so image deraining has attracted considerable attention in recent years. In this paper, an improved generative adversarial network for single image deraining is proposed. Following a divide-and-conquer strategy, we split the deraining task into rain locating, rain removal, and detail refinement sub-tasks. A multi-stream DenseNet, termed the Rain Estimation Network, estimates the rain location map; a Generative Adversarial Network removes the rain streaks; and a Refinement Network refines the details. Experiments on two synthetic datasets and real-world images demonstrate that the proposed method outperforms state-of-the-art deraining studies in both objective and subjective measurements.
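The divide-and-conquer pipeline is simply a composition of three stages, each conditioned on the previous one's output. The sketch below shows that data flow with trivial stand-in callables; the real networks are learned models, and the interfaces shown (a single-channel rain map, mask-style removal) are assumptions for illustration.

```python
import torch

def derain(image, rain_net, removal_net, refine_net):
    """Hypothetical composition of the three sub-tasks described above."""
    rain_map = rain_net(image)             # 1) where is the rain?
    coarse = removal_net(image, rain_map)  # 2) strip the streaks
    return refine_net(coarse)              # 3) restore fine detail

# dummy stand-ins just to demonstrate the data flow
img = torch.rand(1, 3, 32, 32)
out = derain(
    img,
    rain_net=lambda x: (x.mean(1, keepdim=True) > 0.5).float(),  # (1,1,H,W)
    removal_net=lambda x, m: x * (1 - m),   # zero out "rainy" pixels
    refine_net=lambda x: x.clamp(0, 1),     # keep values in valid range
)
```

Keeping the stages as separate modules means each can be trained (or swapped out) independently before joint fine-tuning.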

6.
Sensors (Basel) ; 20(15)2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32726915

ABSTRACT

Image-to-image conversion based on deep learning is a topic of interest in robotics and computer vision. A range of typical tasks, such as mapping semantic labels to building photos, edges to photos, or rainy images to derained images, can be framed as paired image-to-image conversion problems. In such problems, the image generation network learns from the information contained in the input images. The input images and the corresponding target images must share the same basic structure for the network to generate well-aligned, target-oriented outputs. However, the shared basic structure between paired images is rarely as ideal as assumed, which can significantly affect the output of the generative model. We therefore propose a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as a general-purpose framework for imperfectly paired image-to-image conversion problems. Experimental results demonstrate that IP-RAN significantly outperforms current state-of-the-art techniques.
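For context on what a paired-conversion objective typically combines, the sketch below shows a standard conditional-GAN recipe: an adversarial term that pushes outputs to look realistic plus an L1 reconstruction term that keeps them near the paired target. This is the generic pix2pix-style baseline such frameworks build on; IP-RAN's input-perceptual terms are not reproduced here.

```python
import torch
import torch.nn.functional as F

def paired_translation_loss(fake, target, d_fake_logits, lam=100.0):
    """Generic conditional-GAN generator loss (illustrative baseline)."""
    # adversarial term: generator wants the discriminator to say "real"
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # reconstruction term: stay close to the paired target image
    rec = F.l1_loss(fake, target)
    return adv + lam * rec

loss = paired_translation_loss(
    torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8), torch.zeros(2, 1))
```

The weight `lam` trades realism against fidelity to the (possibly imperfect) pairing, which is exactly the tension the abstract highlights.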

7.
Neural Netw ; 178: 106428, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38901091

ABSTRACT

By sidestepping the difficulty of obtaining paired real-world data, recent unsupervised single image deraining (SID) methods have achieved notably acceptable deraining performance. However, previous methods usually fail to produce a high-quality rain-free image because they pay insufficient attention to semantic representation and image content, and therefore cannot completely separate the content from the rain layer. In this paper, we develop a novel cycle contrastive adversarial framework for unsupervised SID, consisting mainly of cycle contrastive learning (CCL) and location contrastive learning (LCL). Specifically, CCL achieves high-quality image reconstruction and rain-layer stripping by pulling similar features together and pushing dissimilar features apart in both semantic and discriminant latent spaces. Meanwhile, LCL implicitly constrains the mutual information of the same location across different exemplars to preserve content information. In addition, inspired by the powerful Segment Anything Model (SAM), which can extract widely applicable semantic structural details, we formulate a structural-consistency regularization to fine-tune our network using SAM. We also introduce a vision transformer (ViT) into the network architecture to further improve performance. In our transformer-based GAN, to obtain a stronger representation, we propose a multi-layer channel compression attention module (MCCAM) to extract richer features. Equipped with these techniques, our unsupervised SID algorithm, called CCLformer, achieves advantageous deraining performance. Extensive experiments demonstrate both the superiority of our method and the effectiveness of each module in CCLformer. The code is available at https://github.com/zhihefang/CCLGAN.
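The "pull similar, push dissimilar" mechanic behind CCL is typically implemented as an InfoNCE-style loss: the anchor's similarity to its positive is treated as the correct class among the negatives. The generic version below is shown for intuition only; the paper's actual cycle and location contrastive terms differ in where the features come from.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss. anchor/positive: (B, D); negatives: (B, K, D)."""
    a = F.normalize(anchor, dim=-1)
    # cosine similarity to the positive: (B, 1)
    pos = (a * F.normalize(positive, dim=-1)).sum(-1, keepdim=True)
    # cosine similarities to K negatives: (B, K)
    neg = torch.einsum('bd,bkd->bk', a, F.normalize(negatives, dim=-1))
    logits = torch.cat([pos, neg], dim=1) / tau
    # the positive sits at index 0 of each row
    return F.cross_entropy(logits, torch.zeros(len(a), dtype=torch.long))

l = info_nce(torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 8, 16))
```

The temperature `tau` controls how sharply the loss concentrates on the hardest negatives.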

8.
Math Biosci Eng ; 20(7): 12240-12262, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37501441

ABSTRACT

The recognition of traffic signs is of great significance to intelligent driving and traffic systems, yet most current traffic sign recognition algorithms do not consider the impact of rainy weather. Rain marks obscure the recognition target in the image, degrading algorithm performance; this problem has yet to be solved. To improve the accuracy of traffic sign recognition in rainy weather, we propose a rainy traffic sign recognition algorithm comprising two modules. First, we propose an image deraining algorithm based on a Progressive Multi-scale Residual Network (PMRNet), which uses a multi-scale residual structure to extract features at different scales, improving the algorithm's use of available information, and incorporates a Convolutional Long Short-Term Memory (ConvLSTM) network to strengthen the extraction of rain-mark features. Second, we use the CoT-YOLOv5 algorithm to recognize traffic signs in the recovered images. To improve the performance of YOLOv5 (You Only Look Once), the 3 × 3 convolution in the feature extraction module is replaced by a Contextual Transformer (CoT) module, compensating for the limited global modeling capability of convolutional neural networks (CNNs) and thus improving recognition accuracy. Experimental results show that the PMRNet-based deraining algorithm effectively removes rain marks, with Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) scores better than other representative algorithms. The mean Average Precision (mAP) of CoT-YOLOv5 on the TT100K dataset reaches 92.1%, 5% higher than the original YOLOv5.
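PSNR, the first evaluation metric quoted above, is simple enough to state directly: 10·log10(peak² / MSE), higher is better, infinite for identical images. A minimal implementation for images scaled to [0, 1]:

```python
import numpy as np

def psnr(clean, restored, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in [0, peak]."""
    clean = np.asarray(clean, dtype=float)
    restored = np.asarray(restored, dtype=float)
    mse = np.mean((clean - restored) ** 2)   # mean squared error
    if mse == 0:
        return float('inf')                  # identical images
    return 10 * np.log10(peak ** 2 / mse)

# a uniform error of 0.1 gives MSE = 0.01, so PSNR = 10*log10(100) = 20 dB
print(round(psnr(np.full((4, 4), 0.5), np.full((4, 4), 0.6)), 2))  # 20.0
```

SSIM is considerably more involved (local means, variances, and covariances over sliding windows), so in practice it is taken from a library such as scikit-image rather than hand-rolled.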
