Results 1 - 15 of 15
1.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066115

ABSTRACT

3D object detection is a challenging and promising task for autonomous driving and robotics, benefiting significantly from the fusion of multiple sensors such as LiDAR and cameras. Conventional methods for sensor fusion rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often suffer from inadequate flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. To address these challenges, we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike conventional methods, our BAFusion module can adaptively learn the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer optimizes the computational complexity of attention mechanisms and facilitates advanced interactions between image and point cloud data, showcasing a novel approach to addressing the challenges of cross-modal attention calculations. We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects like cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.
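
The abstract only names the mechanism, so the following minimal PyTorch sketch illustrates what bidirectional cross-attention fusion between flattened LiDAR BEV and camera feature maps can look like. The layer sizes, head count, and fusion head are assumptions for illustration, not the paper's BAFusion or CFLAF implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention between LiDAR BEV and camera features.

    Illustrative sketch only; dimensions and the concatenation-based fusion
    head are assumptions, not the paper's BAFusion module.
    """
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.lidar_to_cam = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cam_to_lidar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, lidar_feats, cam_feats):
        # lidar_feats: (B, N_lidar, C); cam_feats: (B, N_cam, C),
        # i.e. flattened BEV and image feature maps.
        l2c, _ = self.lidar_to_cam(lidar_feats, cam_feats, cam_feats)
        c2l, _ = self.cam_to_lidar(cam_feats, lidar_feats, lidar_feats)
        # Fuse the camera-enriched LiDAR stream with the pooled,
        # LiDAR-enriched camera stream.
        pooled = c2l.mean(dim=1, keepdim=True).expand_as(l2c)
        return self.out(torch.cat([l2c, pooled], dim=-1))
```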

2.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38474994

ABSTRACT

Graph neural networks (GNNs) have proven to be an ideal approach for dealing with irregular point clouds, but they involve massive computations for searching neighboring points in the graph, which limits their application in large-scale LiDAR point cloud processing. Down-sampling is a straightforward and indispensable step in current GNN-based 3D detectors for reducing the computational burden of the model, but commonly used down-sampling methods cannot distinguish the categories of LiDAR points, so they cannot improve the computational efficiency of GNN models without hurting detection accuracy. In this paper, we propose (1) a LiDAR point cloud pre-segmented down-sampling (PSD) method that selectively reduces background points while preserving foreground object points, greatly improving the computational efficiency of the model without affecting its 3D detection accuracy, and (2) a lightweight GNN-based 3D detector that extracts point features and detects objects directly from the raw down-sampled LiDAR point cloud without any pre-transformation. We test the proposed model on the KITTI 3D Object Detection Benchmark, and the results demonstrate its effectiveness and efficiency for autonomous driving 3D object detection.
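
As a hedged sketch of what category-aware down-sampling might look like, the snippet below keeps points flagged as foreground by a hypothetical pre-segmentation stage and randomly thins the rest; the threshold and keep ratio are illustrative, not the paper's PSD parameters.

```python
import numpy as np

def presegmented_downsample(points, fg_scores, fg_thresh=0.5, keep_bg_ratio=0.2, rng=None):
    """Keep likely-foreground points, randomly thin the background.

    `fg_scores` is assumed to come from a cheap pre-segmentation stage.
    """
    rng = rng or np.random.default_rng(0)
    fg_idx = np.flatnonzero(fg_scores >= fg_thresh)
    bg_idx = np.flatnonzero(fg_scores < fg_thresh)
    # Sub-sample only the background; foreground survives intact.
    kept_bg = rng.choice(bg_idx, size=int(len(bg_idx) * keep_bg_ratio), replace=False)
    keep = np.sort(np.concatenate([fg_idx, kept_bg]))
    return points[keep]
```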

3.
Sensors (Basel) ; 24(6)2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38544067

ABSTRACT

Perception plays a crucial role in ensuring the safety and reliability of autonomous driving systems. However, the recognition and localization of small objects in complex scenarios still pose challenges. In this paper, we propose a point cloud object detection method based on dynamic sparse voxelization to enhance the detection performance of small objects. This method employs a specialized point cloud encoding network to learn and generate pseudo-images from point cloud features. The feature extraction part uses sliding windows and transformer-based methods. Furthermore, multi-scale feature fusion is performed to enhance the granularity of small object information. In this experiment, the term "small object" refers to objects such as cyclists and pedestrians, which occupy fewer pixels than vehicles, as well as objects that are intrinsically harder to detect. The experimental results demonstrate that, compared to the PointPillars algorithm and other related algorithms on the KITTI public dataset, the proposed algorithm exhibits improved detection accuracy for cyclist and pedestrian targets. In particular, there is a notable improvement in the detection accuracy for objects in the moderate and hard difficulty categories, with an overall average increase in accuracy of about 5%.
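
For readers unfamiliar with pseudo-images, the sketch below shows the generic scatter step that turns sparse encoded voxel features into a dense BEV pseudo-image, in the spirit of PointPillars-style encoders; it is an assumption-laden illustration, not this paper's encoding network.

```python
import torch

def scatter_to_pseudo_image(voxel_feats, coords, grid_h, grid_w):
    """Scatter per-voxel features onto a dense BEV grid (pseudo-image).

    voxel_feats: (N, C) features of N non-empty voxels.
    coords: (N, 2) long tensor of (row, col) BEV cell indices.
    """
    _, c = voxel_feats.shape
    canvas = voxel_feats.new_zeros(c, grid_h * grid_w)
    flat_idx = coords[:, 0] * grid_w + coords[:, 1]  # linearize the cell index
    canvas[:, flat_idx] = voxel_feats.t()            # place features on the grid
    return canvas.view(c, grid_h, grid_w)            # (C, H, W) pseudo-image
```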

4.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733047

ABSTRACT

LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves results comparable to the state-of-the-art method for car objects in both 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model achieves a detection rate of 40.7 FPS at inference time, which is higher than that of the state-of-the-art methods while incurring a lower computational cost.
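
The histogram idea is simple enough to sketch directly; the bin count and normalization below are assumptions, not the paper's configuration.

```python
import numpy as np

def voxel_intensity_histogram(intensities, n_bins=8):
    """Histogram of point reflectances inside one voxel, used as an extra feature.

    Normalized by point count so voxels holding different numbers of points
    remain comparable.
    """
    hist, _ = np.histogram(intensities, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(intensities), 1)

# A voxel with mixed low and high reflectance points:
print(voxel_intensity_histogram(np.array([0.05, 0.10, 0.80, 0.85, 0.90])))
```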

5.
Sensors (Basel) ; 24(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39065844

ABSTRACT

The bird's-eye view (BEV) method, a vision-centric representation-based perception task, is essential and promising for future autonomous vehicle perception. It offers the advantages of being fusion-friendly, intuitive, and end-to-end optimizable, and it is cheaper than LiDAR. The performance of existing BEV methods, however, deteriorates in a tire blow-out situation, because these methods rely heavily on accurate camera calibration, which noisy camera parameters during a blow-out can invalidate. It is therefore extremely unsafe to use existing BEV methods in a tire blow-out situation. In this paper, we propose a geometry-guided auto-resizable kernel transformer (GARKT) method designed specifically for vehicles with a tire blow-out. Specifically, we establish a camera deviation model for vehicles with a blown-out tire. We then use geometric priors to obtain the prior positions in the perspective view with auto-resizable kernels. The resizable perception areas are encoded and flattened to generate the BEV representation. GARKT achieves a nuScenes detection score (NDS) of 0.439 on a newly created blow-out dataset based on nuScenes, and the NDS remains at 0.431 when the tire is completely flat, which is much more robust than other transformer-based BEV methods. Moreover, GARKT runs in near real time, at about 20.5 fps on one GPU.
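
One plausible reading of "geometry-guided auto-resizable kernels" is projecting a prior 3D position through a pinhole model and sizing the sampling kernel by inverse depth; the sketch below encodes exactly that reading and should not be taken as the GARKT design.

```python
import numpy as np

def project_with_resizable_kernel(point_cam, K, base_kernel=32.0):
    """Project a camera-frame 3D prior into the image and pick a kernel size.

    point_cam: (x, y, z) in camera coordinates, z = depth in meters.
    The 1/depth scaling rule is an assumption for illustration.
    """
    u, v, w = K @ point_cam                              # pinhole projection
    u, v = u / w, v / w
    kernel_px = max(int(base_kernel / point_cam[2]), 1)  # nearer => bigger kernel
    return (u, v), kernel_px

K = np.array([[1000.0, 0.0, 640.0],   # illustrative intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
print(project_with_resizable_kernel(np.array([2.0, 0.5, 20.0]), K))
```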

6.
Sensors (Basel) ; 24(14)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39066000

ABSTRACT

In autonomous driving, the fusion of multiple sensors is considered essential for improving the accuracy and safety of 3D object detection. Currently, a fusion scheme combining low-cost cameras with highly robust radars can counteract the performance degradation caused by harsh environments. In this paper, we propose the IRBEVF-Q model, which mainly consists of a BEV (Bird's Eye View) fusion coding module and an object decoder module. The BEV fusion coding module solves the problem of a unified representation for different modal information by fusing image and radar features through 3D spatial reference points as a medium. The query in the object decoder, as a core component, plays an important role in detection. We propose Heat Map-Guided Query Initialization (HGQI) and Dynamic Position Encoding (DPE) for query construction to enrich the prior information of the query, while an Auxiliary Noise Query (ANQ) helps to stabilize matching. The experimental results demonstrate that the proposed fusion model IRBEVF-Q achieves an NDS of 0.575 and an mAP of 0.476 on the nuScenes test set. Compared to recent state-of-the-art methods, our model shows significant advantages, indicating that our approach contributes to improving detection accuracy.
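
To make HGQI concrete, here is a minimal sketch of heat-map-guided query initialization under assumed shapes: decoder queries are placed at the top-k peaks of a BEV heat map rather than learned from scratch. The embedding lookup is illustrative, not the paper's construction.

```python
import torch

def heatmap_guided_queries(heatmap, query_embed, k=100):
    """Select the top-k heat-map peaks as query positions.

    heatmap: (num_classes, H, W) BEV heat map.
    query_embed: nn.Embedding(H * W, C) providing content embeddings.
    """
    c, h, w = heatmap.shape
    # Class-agnostic peak scores, then top-k cells of the flattened grid.
    flat_idx = heatmap.max(dim=0).values.flatten().topk(k).indices
    ys = torch.div(flat_idx, w, rounding_mode="floor")
    xs = flat_idx % w
    ref_points = torch.stack([xs / w, ys / h], dim=-1)  # normalized positions
    return query_embed(flat_idx), ref_points

embed = torch.nn.Embedding(128 * 128, 256)
queries, refs = heatmap_guided_queries(torch.rand(3, 128, 128), embed, k=10)
```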

7.
Sensors (Basel) ; 24(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066063

ABSTRACT

Recently, the growing demand for autonomous driving in industry has generated considerable interest in 3D object detection, resulting in many excellent 3D object detection algorithms. However, most 3D object detectors focus only on a single set of LiDAR points, ignoring the potential to improve performance by leveraging the information provided by consecutive sets of LiDAR points. In this paper, we propose a novel 3D object detection method called temporal motion-aware 3D object detection (TM3DOD), which utilizes temporal LiDAR data. In the proposed TM3DOD method, we aggregate LiDAR voxels over time and enhance the current BEV features by generating motion features from consecutive BEV feature maps. First, we present the temporal voxel encoder (TVE), which generates voxel representations by capturing the temporal relationships among the point sets within a voxel. Next, we design a motion-aware feature aggregation network (MFANet), which aims to enhance the current BEV feature representation by quantifying the temporal variation between two consecutive BEV feature maps. By analyzing the differences and changes in the BEV feature maps over time, MFANet captures motion information and integrates it into the current feature representation, enabling more robust and accurate detection of 3D objects. Experimental evaluations on the nuScenes benchmark dataset demonstrate that the proposed TM3DOD method achieves significant improvements in 3D detection performance compared with the baseline methods. Additionally, our method achieves performance comparable to state-of-the-art approaches.
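
A compact stand-in for the motion-aware aggregation idea: compute the temporal variation between two consecutive (ego-motion-aligned) BEV maps and fuse it back into the current one. The channel widths and plain convolutions are assumptions, not MFANet itself.

```python
import torch
import torch.nn as nn

class MotionFeatureFusion(nn.Module):
    """Enhance the current BEV map with the change from the previous frame."""
    def __init__(self, channels=64):
        super().__init__()
        self.motion_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, bev_prev, bev_curr):
        # bev_*: (B, C, H, W), assumed already aligned for ego-motion.
        motion = self.motion_conv(bev_curr - bev_prev)  # temporal variation
        return self.fuse(torch.cat([bev_curr, motion], dim=1))
```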

8.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339632

ABSTRACT

Point cloud densification is essential for understanding the 3D environment. It provides crucial structural and semantic information for downstream tasks such as 3D object detection and tracking. However, existing registration-based methods struggle with dynamic targets due to the incompleteness and deformation of point clouds. To address this challenge, we propose a Kalman-based scene flow estimation method for point cloud densification and 3D object detection in dynamic scenes. Our method effectively tackles the issue of localization errors in scene flow estimation and enhances the accuracy and precision of shape completion. Specifically, we introduce a Kalman filter to correct the dynamic target's position while estimating long-sequence scene flow. This approach helps eliminate the cumulative localization error during the scene flow estimation process. Extensive experiments on the KITTI 3D tracking dataset demonstrate that our method significantly improves the performance of LiDAR-only detectors, achieving superior results compared to the baselines.
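
The correction step is a standard linear Kalman filter; the generic predict/update cycle below shows where a scene-flow position estimate would enter as the measurement. The state model is generic, not necessarily the paper's.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state (e.g. position and velocity); z: position measured by scene flow.
    F, H: transition and measurement matrices; Q, R: noise covariances.
    """
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    y = z - H @ x                       # innovation from the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # corrected position
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```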

9.
Sensors (Basel) ; 24(7)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38610381

ABSTRACT

Cooperative perception in the field of connected autonomous vehicles (CAVs) aims to overcome the inherent limitations of single-vehicle perception systems, including long-range occlusion, low resolution, and susceptibility to weather interference. In this regard, we propose a high-precision 3D object detection V2V cooperative perception algorithm. The algorithm uses a voxel grid-based statistical filter to effectively denoise point cloud data and obtain clean, reliable data. In addition, we design a feature extraction network based on the fusion of voxels and PointPillars and encode the result to generate BEV features, which solves the problem of missing spatial feature interaction in the PointPillars approach and enhances the semantic information of the extracted features. A maximum pooling technique is used to reduce dimensionality and generate pseudo-images, thereby skipping complex 3D convolutional computation. To facilitate effective feature fusion, we design a feature-level cross-vehicle feature fusion module. Experimental validation is conducted using the OPV2V dataset to assess vehicle co-perception performance and compare it with existing mainstream co-perception algorithms. Ablation experiments are also carried out to confirm the contributions of this approach. The experimental results show that our architecture is more lightweight and achieves a higher average precision (AP) than other existing models.
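
As a generic illustration of statistical point-cloud denoising (the paper applies the statistics per voxel-grid cell, which is omitted here), the sketch drops points whose mean distance to their k nearest neighbors is anomalously large.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=16, std_ratio=2.0):
    """Remove points far from their neighbors, a common LiDAR denoising step."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # nearest neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)       # mean distance to k true neighbors
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < thresh]
```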

10.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732995

ABSTRACT

In the realm of computer vision, the integration of advanced techniques into the pre-processing of RGB-D camera inputs poses a significant challenge, given the inherent complexities arising from diverse environmental conditions and varying object appearances. Therefore, this paper introduces FusionVision, a comprehensive pipeline for the robust 3D segmentation of objects in RGB-D imagery. Traditional computer vision systems, designed mainly for RGB cameras, face limitations in simultaneously capturing precise object boundaries and achieving high-precision object detection on depth maps. To address this challenge, FusionVision adopts an integrated approach by merging state-of-the-art object detection techniques with advanced instance segmentation methods. The integration of these components enables a holistic interpretation of RGB-D data (a unified analysis of the information obtained from both the color RGB and depth D channels), facilitating the extraction of comprehensive and accurate object information to improve downstream processes such as object 6D pose estimation, Simultaneous Localization and Mapping (SLAM) operations, and accurate 3D dataset extraction. The proposed FusionVision pipeline employs YOLO to identify objects within the RGB image domain. Subsequently, FastSAM, an innovative semantic segmentation model, is applied to delineate object boundaries, yielding refined segmentation masks. The synergy between these components and their integration into 3D scene understanding ensures a cohesive fusion of object detection and segmentation, enhancing overall precision in 3D object segmentation.
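
In the spirit of the pipeline, but simplified (the FastSAM mask refinement is replaced by the raw detection box, and the camera intrinsics K are assumed known), the sketch below detects objects with YOLO and back-projects the depth pixels inside each box to a 3D point cloud.

```python
import numpy as np
from ultralytics import YOLO  # assumed available

def detect_and_lift(rgb, depth, K):
    """Detect objects in RGB, then lift in-box depth pixels to 3D points."""
    model = YOLO("yolov8n.pt")              # illustrative choice of weights
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    clouds = []
    for box in model(rgb)[0].boxes.xyxy.cpu().numpy():
        x1, y1, x2, y2 = box.astype(int)
        vs, us = np.mgrid[y1:y2, x1:x2]     # pixel coordinates inside the box
        z = depth[vs, us]
        ok = z > 0                          # keep valid depth readings only
        # Pinhole back-projection of each valid depth pixel.
        clouds.append(np.stack([(us[ok] - cx) * z[ok] / fx,
                                (vs[ok] - cy) * z[ok] / fy,
                                z[ok]], axis=-1))
    return clouds
```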

11.
Sensors (Basel) ; 24(1)2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38202888

ABSTRACT

With increasing demand from unmanned driving and robotics, more attention has been paid to point-cloud-based accurate 3D object detection technology. However, due to the sparseness and irregularity of point clouds, the most critical problem is how to utilize the relevant features more efficiently. In this paper, we propose a point-based object detection enhancement network that improves detection accuracy in 3D scene understanding based on distance features. First, the distance features are extracted from the raw point sets and fused with the raw reflectivity features of the point cloud to maximize the use of the information in the point cloud. Second, we enhance the distance features and raw features, which we collectively refer to as the self-features of the key points, in set abstraction (SA) layers with a self-attention mechanism, so that foreground points can be better distinguished from background points. Finally, we revise the group aggregation module in the SA layers to enhance the feature aggregation effect of the key points. We conducted experiments on the KITTI and nuScenes datasets, and the results show that the proposed enhancement method performs excellently.
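
The distance feature itself is nearly a one-liner; the sketch simply appends each point's range from the sensor to its raw x, y, z, reflectivity channels. The concatenation scheme is a plain illustration, not the paper's exact encoding.

```python
import numpy as np

def append_distance_feature(points):
    """points: (N, 4) array of x, y, z, reflectivity -> (N, 5) with range added."""
    dist = np.linalg.norm(points[:, :3], axis=1, keepdims=True)  # range to sensor
    return np.hstack([points, dist])
```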

12.
Neural Netw ; 179: 106623, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39154419

ABSTRACT

LiDAR point clouds can effectively depict the motion and posture of objects in three-dimensional space. Many studies accomplish 3D object detection by voxelizing point clouds. However, in autonomous driving scenarios, the sparsity and hollowness of point clouds create difficulties for voxel-based methods: sparsity makes it challenging to describe the geometric features of objects, and hollowness complicates the aggregation of 3D features. We propose a two-stage 3D object detection framework called MS23D. (1) We propose a method that uses voxel feature points from multiple branches to construct the 3D feature layer, yielding a relatively compact 3D feature layer with rich semantic features. Additionally, we propose a distance-weighted sampling method that reduces the loss of foreground points caused by downsampling and allows the 3D feature layer to retain more foreground points. (2) In response to the hollowness of point clouds, we predict the offsets between deep-level feature points and the object's centroid, bringing them as close as possible to the centroid; this enables the aggregation of these feature points with their abundant semantic features. Shallow-level feature points are retained on the object's surface to describe its geometric features. To validate our approach, we evaluate its effectiveness on both the KITTI and ONCE datasets.
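
A hedged sketch of the centroid-offset idea under assumed shapes: the network predicts a 3D offset per deep-level feature point, and training pulls the shifted foreground points toward their object's centroid. The smooth-L1 objective is an assumption.

```python
import torch
import torch.nn.functional as F

def centroid_offset_loss(feat_xyz, pred_offsets, centroids, fg_mask):
    """feat_xyz, pred_offsets, centroids: (N, 3); fg_mask: (N,) bool.

    `centroids[i]` is the centroid of the object feature point i belongs to.
    """
    shifted = feat_xyz + pred_offsets          # move points toward the centroid
    return F.smooth_l1_loss(shifted[fg_mask], centroids[fg_mask])
```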

13.
Neural Netw ; 179: 106535, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39047336

ABSTRACT

Multi-modal 3D object detection is instrumental in identifying and localizing objects within 3D space. It combines RGB images from cameras and point-cloud data from LiDAR sensors, serving as a fundamental technology for autonomous driving applications. Current methods commonly employ simplistic element-wise additions or multiplications to aggregate multi-modal features extracted from point clouds and images. While these methods enhance detection accuracy, the use of such basic operations makes it difficult to balance the significance of the modalities, potentially introducing noise and irrelevant information during feature aggregation. Additionally, the multi-level features extracted from images display imbalances in receptive fields. To tackle these challenges, we propose two innovative networks: a cross-modality balance network (CMN) and a cross-scale balance network (CSN). CMN incorporates cross-modality attention mechanisms and introduces an auxiliary 2D detection head to balance the significance of both modalities. Meanwhile, CSN leverages cross-scale attention mechanisms to mitigate the gap in receptive fields between different image levels. Additionally, we introduce a novel Local with Global Voxel Attention Encoder (LGVAE) designed to capture global semantics by extracting more comprehensive point-level information into voxel-level features. We perform comprehensive experiments on three challenging public benchmarks: KITTI, Dense, and nuScenes. The results consistently demonstrate improvements across multiple 3D object detection frameworks, affirming the effectiveness and versatility of our proposed method. Remarkably, our approach achieves a substantial absolute gain of 3.1% over the baseline MVXNet on the challenging Hard set of the Dense test set.
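
One PointNet-style reading of "local with global" voxel encoding, sketched under assumed layer sizes: a max-pool over all voxel features yields a global summary that is concatenated back onto each local feature. This is an illustration, not the LGVAE specification.

```python
import torch
import torch.nn as nn

class LocalGlobalVoxelEncoder(nn.Module):
    """Mix a global point-cloud summary into each voxel's local feature."""
    def __init__(self, dim=64):
        super().__init__()
        self.local = nn.Linear(dim, dim)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, voxel_feats):                          # (B, N_voxels, C)
        local = torch.relu(self.local(voxel_feats))
        global_ctx = local.max(dim=1, keepdim=True).values   # (B, 1, C) summary
        # Broadcast the global semantics back onto every voxel.
        return self.merge(torch.cat([local, global_ctx.expand_as(local)], dim=-1))
```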

14.
Front Robot AI ; 11: 1212070, 2024.
Article in English | MEDLINE | ID: mdl-38510560

ABSTRACT

This survey reviews advances in 3D object detection approaches for autonomous driving. A brief introduction to 2D object detection is first given, and the drawbacks of existing methodologies in highly dynamic environments are identified. Subsequently, this paper reviews state-of-the-art 3D object detection techniques that utilize monocular and stereo vision for reliable detection in urban settings. Based on their depth inference basis, learning schemes, and internal representations, this work presents a taxonomy of three method classes: model-based and geometrically constrained approaches, end-to-end learning methodologies, and hybrid methods. Multi-view detectors, a current trend among end-to-end methods, are highlighted for their improved robustness. Detectors from the latter two classes were specifically selected to exploit the autonomous driving context in terms of geometry, scene content, and instance distribution. To assess the effectiveness of each method, 3D object detection datasets for autonomous vehicles are described along with their unique features, e.g., varying weather conditions, multi-modality, and multi-camera perspectives, and their respective metrics associated with different difficulty categories. In addition, we include multi-modal visual datasets, i.e., V2X, that may tackle the problem of single-view occlusion. Finally, current research trends in object detection are summarized, followed by a discussion of possible directions for future research in this domain.

15.
Biomimetics (Basel) ; 9(7)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39056853

ABSTRACT

In complex traffic environments, targets for 3D tracking and detection are often occluded by various stationary and moving objects. When a target is occluded, its appearance changes, reducing tracking and detection accuracy. To solve this problem, we propose learning vehicle behavior from driving data, predicting and calibrating the vehicle trajectory, and finally using the artificial fish swarm algorithm to optimize the tracking results. Experiments show that, compared with the CenterTrack method, the proposed method improves the key MOTA (Multi-Object Tracking Accuracy) indicator for 3D object detection and tracking on the nuScenes dataset, with a frame rate of 26 fps.
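
For reference, the artificial fish swarm algorithm iterates simple behaviors (prey, swarm, follow) over a population of candidate solutions; the generic "prey" step below is shown on an arbitrary fitness function, whereas the paper would define fitness over candidate tracking results.

```python
import numpy as np

def afsa_prey_step(fish, fitness, visual=0.5, tries=5, rng=None):
    """One 'prey' move for a single artificial fish (candidate solution)."""
    rng = rng or np.random.default_rng()
    for _ in range(tries):
        candidate = fish + visual * rng.uniform(-1, 1, size=fish.shape)
        if fitness(candidate) > fitness(fish):   # move toward better food
            return candidate
    # No improvement found within visual range: take a small random step.
    return fish + 0.1 * visual * rng.uniform(-1, 1, size=fish.shape)
```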
