BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera.
Liu, Min; Jia, Yuanjun; Lyu, Youhao; Dong, Qi; Yang, Yanyu.
Affiliations
  • Liu M; Institute of Advanced Technology, University of Science and Technology of China, Hefei 230088, China.
  • Jia Y; China Academy of Electronics and Information Technology, Beijing 100041, China.
  • Lyu Y; Institute of Advanced Technology, University of Science and Technology of China, Hefei 230088, China.
  • Dong Q; China Academy of Electronics and Information Technology, Beijing 100041, China.
  • Yang Y; China Academy of Electronics and Information Technology, Beijing 100041, China.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066115
ABSTRACT
3D object detection is a challenging and promising task for autonomous driving and robotics, benefiting significantly from multi-sensor fusion, such as LiDAR and cameras. Conventional methods for sensor fusion rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often suffer from inadequate flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. Addressing these challenges, in this paper, we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike the conventional methods, our BAFusion module can adaptively learn the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer optimizes the computational complexity of attention mechanisms and facilitates advanced interactions between image and point cloud data, showcasing a novel approach to addressing the challenges of cross-modal attention calculations. We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects like cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.
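The abstract describes two core ideas: bidirectional cross-attention between image and point-cloud features, and a kernelized (linear) attention variant to reduce the quadratic cost of attention. The following NumPy sketch illustrates both mechanisms in generic form; it is not the authors' implementation, and all names (`linear_cross_attention`, the feature map `phi`, the residual fusion) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Standard cross-attention: queries from one modality attend to the
    other modality's features. Cost is O(N*M) in the sequence lengths."""
    d = q_feats.shape[-1]
    attn = softmax(q_feats @ kv_feats.T / np.sqrt(d))
    return attn @ kv_feats

def linear_cross_attention(q_feats, kv_feats,
                           phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized ("linear") cross-attention: a positive feature map phi
    replaces the softmax, so the matrix products can be reordered and the
    cost drops to O((N+M)*d^2). This mirrors the complexity reduction the
    CFLAF Layer is said to target, not its exact formulation."""
    q, k = phi(q_feats), phi(kv_feats)
    kv = k.T @ kv_feats            # (d, d) summary of the key/value modality
    z = q @ k.sum(axis=0)          # per-query normalizer
    return (q @ kv) / z[:, None]

# Bidirectional fusion: each modality attends to the other, and the result
# is folded back (here via a simple residual sum; the paper's fusion is
# presumably richer).
rng = np.random.default_rng(0)
img = rng.standard_normal((100, 64))   # hypothetical image features
pts = rng.standard_normal((200, 64))   # hypothetical point-cloud features
img_fused = img + linear_cross_attention(img, pts)
pts_fused = pts + linear_cross_attention(pts, img)
```

Because the attention weights are learned from the features themselves rather than fixed by a projection matrix, this kind of fusion can tolerate calibration noise between the two sensors, which is the flexibility argument the abstract makes.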
Keywords

Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: English | Journal: Sensors (Basel) | Publication year: 2024 | Document type: Article