A Vision-Language Model-Based Traffic Sign Detection Method for High-Resolution Drone Images: A Case Study in Guyuan, China.
Yao, Jianqun; Li, Jinming; Li, Yuxuan; Zhang, Mingzhu; Zuo, Chen; Dong, Shi; Dai, Zhe.
Affiliation
  • Yao J; CCCC Infrastructure Maintenance Group Co., Ltd., Beijing 100011, China.
  • Li J; CCCC Infrastructure Maintenance Group Co., Ltd., Beijing 100011, China.
  • Li Y; CCCC Infrastructure Maintenance Group Co., Ltd., Beijing 100011, China.
  • Zhang M; School of Transportation Engineering, Chang'an University, Xi'an 710064, China.
  • Zuo C; School of Transportation Engineering, Chang'an University, Xi'an 710064, China.
  • Dong S; School of Transportation Engineering, Chang'an University, Xi'an 710064, China.
  • Dai Z; School of Transportation Engineering, Chang'an University, Xi'an 710064, China.
Sensors (Basel); 24(17), 2024 Sep 06.
Article in En | MEDLINE | ID: mdl-39275711
ABSTRACT
As a fundamental element of the transportation system, traffic signs are widely used to guide traffic behavior. In recent years, drones have emerged as an important tool for monitoring the condition of traffic signs. However, existing image processing techniques rely heavily on image annotations, and building a high-quality dataset with diverse training images and human annotations is time-consuming. In this paper, we introduce the use of Vision-Language Models (VLMs) for the traffic sign detection task. Without the need for discrete image labels, rapid deployment is achieved through multi-modal learning and large-scale pretrained networks. First, we compile a keyword dictionary to describe traffic signs, using the Chinese national standard to supply shape and color information. Our program runs Bootstrapping Language-Image Pretraining v2 (BLIPv2) to translate representative images into text descriptions. Second, a Contrastive Language-Image Pretraining (CLIP) framework is applied to characterize both drone images and text descriptions. Our method uses the pretrained encoder networks to create visual features and word embeddings. Third, the category of each traffic sign is predicted according to the similarity between drone images and keywords; cosine distance and a softmax function are used to compute the class probability distribution. To evaluate performance, we apply the proposed method in a practical application: drone images captured in Guyuan, China, are used to record the condition of traffic signs. Further experiments include two widely used public datasets. The results indicate that our vision-language model-based method achieves acceptable prediction accuracy at a low training cost.
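The third step described in the abstract, scoring each drone image against keyword embeddings and converting the scores to a class probability distribution, can be sketched as follows. This is a minimal illustration only: the embeddings below are toy three-dimensional vectors standing in for precomputed CLIP image and text features, and the keyword phrases are hypothetical examples, not the authors' actual dictionary entries.

```python
import math

# Toy stand-ins for precomputed CLIP embeddings: one drone-image feature
# and one text embedding per (hypothetical) keyword-dictionary entry.
image_feature = [0.8, 0.6, 0.0]
text_embeddings = {
    "red circular prohibition sign": [0.9, 0.4, 0.2],
    "blue rectangular guide sign": [0.1, 0.2, 0.97],
    "yellow triangular warning sign": [0.3, 0.9, 0.3],
}

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = list(text_embeddings)
sims = [cosine_similarity(image_feature, text_embeddings[k]) for k in labels]
probs = softmax(sims)
prediction = labels[max(range(len(labels)), key=lambda i: probs[i])]
```

In a real deployment the vectors would come from the CLIP image and text encoders, and a temperature scale would typically be applied to the similarities before the softmax; the zero-shot decision rule itself is unchanged.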
Full text: 1 Database: MEDLINE Language: En Publication year: 2024 Document type: Article