Detection of Road Crack Images Based on Multistage Feature Fusion and a Texture Awareness Method.
Guo, Maozu; Tian, Wenbo; Li, Yang; Sui, Dong.
Affiliation
  • Guo M; School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.
  • Tian W; Beijing Key Laboratory for Intelligent Processing Methods of Architectural Big Data, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.
  • Li Y; School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.
  • Sui D; Beijing Key Laboratory for Intelligent Processing Methods of Architectural Big Data, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.
Sensors (Basel) ; 24(11)2024 May 21.
Article em En | MEDLINE | ID: mdl-38894061
ABSTRACT
Structural health monitoring for roads is an important task that supports inspection of transportation infrastructure. This paper explores deep learning techniques for crack detection in road images and proposes an automatic pixel-level semantic road crack image segmentation method based on a Swin transformer. This method employs Swin-T as the backbone network to extract feature information from crack images at various levels and uses a texture unit to extract the texture and edge characteristics of cracks. The refinement attention module (RAM) and panoramic feature module (PFM) then merge these diverse features, ultimately refining the segmentation results. We call this method FetNet. We collect four public real-world datasets and conduct extensive experiments, comparing FetNet with various deep learning methods. FetNet achieves the highest precision of 90.4%, a recall of 85.3%, an F1 score of 87.9%, and a mean intersection over union of 78.6% on the Crack500 dataset. The experimental results show that the FetNet approach surpasses other advanced models in terms of crack segmentation accuracy and exhibits excellent generalizability for use in complex scenes.
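The abstract describes merging feature maps extracted at multiple backbone stages into a single segmentation-resolution representation. The paper's actual RAM/PFM fusion is not specified here; the following is only a minimal, hypothetical sketch of the general multistage-fusion idea (upsample coarse stages to the finest resolution, then merge), with all function names and shapes invented for illustration.

```python
import numpy as np

def upsample2x(f: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse_stages(features: list) -> np.ndarray:
    """Merge multistage feature maps (finest stage first, each later stage
    at half the resolution of the previous) by upsampling every coarse
    stage to the finest resolution and averaging channel-wise.

    This stand-in averaging is NOT the paper's RAM/PFM fusion, which
    additionally applies attention-based refinement.
    """
    fused = np.zeros_like(features[0])
    for i, f in enumerate(features):
        for _ in range(i):  # stage i is 2**i times coarser than stage 0
            f = upsample2x(f)
        fused += f
    return fused / len(features)

# Three toy stages with C=4 channels at 16x16, 8x8, and 4x4 resolution
stages = [np.random.rand(4, 16 >> i, 16 >> i) for i in range(3)]
out = fuse_stages(stages)
print(out.shape)  # (4, 16, 16)
```

In a real transformer-based segmenter the averaging step would be replaced by learned fusion (e.g., channel attention), but the resolution-alignment step shown here is common to most multistage designs.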
Full text: 1 Database: MEDLINE Language: En Publication year: 2024 Document type: Article
