1.
Sensors (Basel) ; 24(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39000880

ABSTRACT

Vehicle-infrastructure cooperative perception is becoming increasingly crucial for autonomous driving systems, as it leverages the infrastructure's broader spatial perspective and computational resources. This paper introduces CoFormerNet, a novel framework for improving cooperative perception. CoFormerNet employs a consistent structure for both the vehicle and infrastructure branches, integrating a temporal aggregation module and spatial-modulated cross-attention to fuse intermediate features at two distinct stages. This design effectively handles communication delays and spatial misalignment. Experimental results on the DAIR-V2X and V2XSet datasets demonstrate that CoFormerNet significantly outperforms existing methods, achieving state-of-the-art performance in 3D object detection.

2.
Sensors (Basel) ; 24(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38544267

ABSTRACT

Autonomous driving requires recognition technology that can quickly and accurately recognize even small objects in high-speed situations. This study proposes an object point extraction method using rule-based LiDAR ring data and edge triggers to increase both speed and performance. The LiDAR's ring information is interpreted as a digital pulse to remove the ground, and object points are extracted by detecting discontinuous edges in the z values aligned by ring ID and azimuth. Bounding boxes were then created from the extracted object points using DBSCAN and PCA to check recognition performance. Ground removal and point extraction via the ring edge were verified on SemanticKITTI and the Waymo Open Dataset, and both F1 scores were superior to RANSAC. In addition, the extracted object bounding boxes showed higher PDR index performance when verified on open datasets, in virtual driving environments, and in actual driving environments.
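
A minimal sketch of the box-fitting step described above, assuming the ground has already been removed and the extracted object points sit in an N x 3 NumPy array; the eps and min_samples values and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def oriented_boxes(object_points, eps=0.5, min_samples=10):
    # Cluster non-ground object points, then fit one oriented box per cluster.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(object_points)
    boxes = []
    for lab in set(labels) - {-1}:               # label -1 marks DBSCAN noise
        cluster = object_points[labels == lab]
        pca = PCA(n_components=3).fit(cluster)
        local = pca.transform(cluster)           # rotate into principal axes
        lo, hi = local.min(axis=0), local.max(axis=0)
        boxes.append({"center": pca.inverse_transform((lo + hi) / 2.0),
                      "size": hi - lo,           # extents along principal axes
                      "axes": pca.components_})  # box orientation (rows)
    return boxes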

3.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339539

ABSTRACT

Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts due to its ability to capture more information, its robustness to dynamic changes in the environment compared to an RGB camera, and its cost, which has decreased in recent years and is an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of 3D data with up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not work in real time in realistic field robotics application scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution for reducing the amount of data to be processed. However, although such selection is memory- and computationally efficient, it retains only a small subset of points, which may cause important features to be missed. To address this problem, our proposed systematic sampling method, SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on a graph colouring algorithm and ensures that the selected points are non-adjacent, so that the resulting subset of points is representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of nodes in each epoch. This leverages a technique called auto-ensemble, where ensemble learning is realized as a collection of different learning models instead of tuning different hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study on the performance of LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
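
The non-adjacency constraint can be pictured with a small sketch: build a k-nearest-neighbour graph over the cloud, colour it greedily, and use each colour class as one epoch's subset. The value of k and the SciPy k-d tree are assumptions for illustration; the paper's own graph construction may differ.

import numpy as np
from scipy.spatial import cKDTree

def colour_classes(points, k=8):
    # Greedy graph colouring on a symmetrized k-NN graph: points sharing a
    # colour are guaranteed to be mutually non-adjacent in the graph.
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)        # column 0 is the point itself
    adj = [set() for _ in range(len(points))]
    for i, row in enumerate(nbrs):
        for j in row[1:]:
            adj[i].add(j)
            adj[j].add(i)                        # make the graph undirected
    colours = np.full(len(points), -1, dtype=int)
    for i in range(len(points)):
        used = {colours[j] for j in adj[i] if colours[j] >= 0}
        c = 0
        while c in used:                         # smallest colour not used nearby
            c += 1
        colours[i] = c
    return [np.where(colours == c)[0] for c in range(colours.max() + 1)]

Feeding a different colour class to the network in each epoch then yields the auto-ensemble behaviour described above: complementary, spatially spread subsets of the same scene.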

4.
Accid Anal Prev ; 195: 107422, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38064940

ABSTRACT

Safety assessment is an active research subject for autonomous vehicles (AVs), which have emerged as a new mode of mobility. In particular, scenario-based safety assessments have garnered significant attention: AVs can be tested on how safely they avoid hypothetical situations leading to accidents. However, scenarios written by humans based on expert knowledge and experience may only partially reflect real-world situations. Instead, we pursue a different technique: extracting statistically significant and more detailed scenarios from sensor data captured during the critical moments when AVs become vulnerable to potential accidents. Specifically, we first render the three-dimensional space around an AV with fixed-size voxels. Then, we model the aggregate kinetics of the objects in each voxel detected by 3D-LiDAR sensors mounted on real test AVs. The Vision Transformer used to model the kinetics helps us quickly pinpoint critical voxels containing objects that threaten the AV's safety. We trace the trajectory of the critical voxels on a visual attention map to describe in detail how AVs become vulnerable to accidents according to the logical scenario format defined by the PEGASUS Project. We tested our novel method with 250 h of 3D-LiDAR recordings capturing critical moments. We devised an inference model that detected critical situations with an F1-score of 98.26%. For each type of scenario, our model consistently identified the critical objects and their tendency to influence AVs. Given the evaluation results, our data-driven approach yields AV safety assessment scenarios with high representativeness, coverage, expansion, and computational feasibility.
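
To make the voxel rendering step concrete, here is a hedged sketch of bucketing LiDAR returns around the ego vehicle into fixed-size voxels; the 0.5 m cell and 50 m extent are made-up parameters, not values from the paper.

import numpy as np

def voxelize(points, voxel=0.5, extent=50.0):
    # points: (N, 3) LiDAR returns in the AV frame.
    # Returns a dict mapping integer voxel indices to the points inside.
    inside = np.all(np.abs(points) < extent, axis=1)
    pts = points[inside]
    idx = np.floor((pts + extent) / voxel).astype(int)  # shift to non-negative indices
    grid = {}
    for key, p in zip(map(tuple, idx), pts):
        grid.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in grid.items()}

Per-voxel statistics of the contained points (counts, motion over successive scans, and so on) would then form the aggregate kinetics fed to the Transformer.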


Subjects
Traffic Accidents, Automobile Driving, Humans, Traffic Accidents/prevention & control, Learning, Autonomous Vehicles, Kinetics, Knowledge, Safety
5.
Sensors (Basel) ; 23(11)2023 May 29.
Article in English | MEDLINE | ID: mdl-37299894

ABSTRACT

In tunnel lining construction, the traditional manual wet-spraying operation is labor-intensive, and consistent quality is difficult to ensure. To address this, this study proposes a LiDAR-based method for sensing the thickness of tunnel wet spray, aiming to improve efficiency and quality. The proposed method utilizes an adaptive point cloud standardization processing algorithm to address differing point cloud postures and missing data, and a segmented Lamé curve is fitted to the tunnel design axis using the Gauss-Newton iteration method. This establishes a mathematical model of the tunnel section and enables the wet-spray thickness to be analyzed by comparing the actual inner contour line with the design line of the tunnel. Experimental results show that the proposed method is effective in sensing the thickness of tunnel wet spray, with important implications for promoting intelligent wet-spraying operations, improving wet-spray quality, and reducing labor costs in tunnel lining construction.
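
As a sketch of the Gauss-Newton fitting step, the snippet below fits a Lamé curve |x/a|^n + |y/b|^n = 1 to 2-D cross-section points with an implicit residual and a forward-difference Jacobian; the residual definition and the single (unsegmented) curve are simplifying assumptions rather than the paper's exact formulation.

import numpy as np

def fit_lame(xy, a0, b0, n0=2.0, iters=20):
    # xy: (N, 2) section points; (a0, b0, n0) is the initial guess.
    theta = np.array([a0, b0, n0], dtype=float)

    def residual(t):
        a, b, n = t
        return np.abs(xy[:, 0] / a) ** n + np.abs(xy[:, 1] / b) ** n - 1.0

    for _ in range(iters):
        r = residual(theta)
        J = np.empty((len(xy), 3))               # numeric Jacobian, column by column
        for j in range(3):
            step = np.zeros(3)
            step[j] = 1e-6 * max(1.0, abs(theta[j]))
            J[:, j] = (residual(theta + step) - r) / step[j]
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton update
        theta += delta
        if np.linalg.norm(delta) < 1e-9:
            break
    return theta                                 # fitted (a, b, n)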


Subjects
Algorithms, Labor (Obstetric), Pregnancy, Female, Humans, Cloud Computing, Intelligence, Lasers
6.
Sensors (Basel) ; 23(6)2023 Mar 18.
Article in English | MEDLINE | ID: mdl-36991950

ABSTRACT

This paper presents the use of deep Reinforcement Learning (RL) for autonomous navigation of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor-Critic Neural Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the 3D LiDAR data as part of the input state of the NNs, a virtual two-dimensional (2D) traversability scanner is developed. The resulting Actor NN has been successfully tested in both real and simulated experiments and favorably compared with a previous reactive navigation approach on the same UGV.
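
A hedged sketch of what such a virtual 2-D traversability scanner might look like: collapse the 3-D cloud into per-azimuth ranges, keeping the nearest return whose height marks it as an obstacle. The height band and bin count are illustrative assumptions.

import numpy as np

def virtual_2d_scan(points, n_bins=360, z_min=0.2, z_max=1.5, max_range=30.0):
    # points: (N, 3) in the robot frame, z up. Returns (n_bins,) ranges.
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    obs = points[mask]
    az = np.arctan2(obs[:, 1], obs[:, 0])               # azimuth in [-pi, pi)
    rng = np.hypot(obs[:, 0], obs[:, 1])
    bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    scan = np.full(n_bins, max_range)
    np.minimum.at(scan, bins, rng)                      # nearest obstacle per bin
    return scan

The resulting fixed-length vector is the kind of compact observation that can be fed directly into the Actor-Critic networks' input state.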

7.
Sensors (Basel) ; 23(5)2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904644

ABSTRACT

A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on a single source of information is prone to failure under specific environmental conditions (e.g., visual cameras are impaired by glare or darkness), so relying on different sensors is an essential step towards robustness against varied environmental conditions. Hence, a perception system with sensor-fusion capabilities produces the redundant and reliable awareness critical for real-world systems. This paper proposes a novel early-fusion module that remains reliable under individual sensor failures when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a still-unexplored combination of visual, infrared, and LiDAR modalities. The contribution is a simple methodology intended to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% for all cases of sensor failure and extreme weather conditions, such as glare, darkness, and fog, with real-time inference times below 6 ms.
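
Early fusion here means merging modalities at the input rather than at the feature or decision level. A minimal sketch, assuming the three modalities are already co-registered to the same image grid (the normalization scheme is an assumption):

import numpy as np

def early_fuse(rgb, ir, lidar_depth):
    # rgb: (H, W, 3) uint8; ir: (H, W) uint8; lidar_depth: (H, W) metres,
    # 0 where the LiDAR gave no return. Returns an (H, W, 5) float tensor.
    rgb_n = rgb.astype(np.float32) / 255.0
    ir_n = ir.astype(np.float32) / 255.0
    d_n = np.clip(lidar_depth / max(lidar_depth.max(), 1e-6), 0.0, 1.0)
    return np.dstack([rgb_n, ir_n, d_n])   # single input for one detector

Because the detector sees all five channels at once, a failed sensor simply contributes blank channels instead of breaking a separate pipeline, which is what makes the per-modality failure cases testable.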

8.
Sensors (Basel) ; 22(23)2022 Nov 26.
Article in English | MEDLINE | ID: mdl-36501907

ABSTRACT

Rapid and accurate reconnaissance in the event of radiological and nuclear (RN) incidents or attacks is vital to launch an appropriate response. This need is made stronger by the increasing threat of RN attacks on soft targets and critical infrastructure in densely populated areas. In such an event, even small radioactive sources can cause major disruption to the general population. In this work, we present a real-time radiological source localization method based on an optimization problem incorporating a background and radiation model. Supported by extensive real-world experiments, we show that an airborne system using this method is capable of reliably locating category 3-4 radioactive sources (according to IAEA safety standards) in real time from altitudes up to 150 m. A sensor bundle comprising a LiDAR sensor, a gamma probe, and a communication module was mounted on a UAV that served as the carrier platform. The method was evaluated on a comprehensive set of test flights, including 28 flight scenarios over 316 min using three different radiation sources. All additional gamma sources were correctly detected; multiple sources were resolved when sufficiently separated from each other, and the distance between the true and estimated source positions averaged 17.1 m. We also discuss the limitations of the system in terms of detection limit and source separation.
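
One way to read the optimization formulation: fit a constant background plus an inverse-square source term to the georeferenced count rates. The sketch below assumes a single ground-level source and uses SciPy's least_squares; the authors' background and radiation model is more complete.

import numpy as np
from scipy.optimize import least_squares

def locate_source(xyz, counts):
    # xyz: (N, 3) detector positions (z = altitude above ground);
    # counts: (N,) measured count rates at those positions.
    def residual(p):
        sx, sy, strength, bg = p
        r2 = (xyz[:, 0] - sx) ** 2 + (xyz[:, 1] - sy) ** 2 + xyz[:, 2] ** 2
        return bg + strength / r2 - counts        # model minus measurement
    x0 = [xyz[:, 0].mean(), xyz[:, 1].mean(), counts.max(), counts.min()]
    sol = least_squares(residual, x0,
                        bounds=([-np.inf, -np.inf, 0.0, 0.0],
                                [np.inf, np.inf, np.inf, np.inf]))
    return sol.x     # estimated (source x, source y, strength, background)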

9.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146325

ABSTRACT

In this paper, we present a new way to compute the odometry of a 3D lidar in real time. Due to the close relation between these sensors and the rapidly growing sector of autonomous vehicles, 3D lidars have improved in recent years, with modern models producing data in the form of range images. We take advantage of this ordered format to efficiently estimate the trajectory of the sensor as it moves in 3D space. The proposed method creates and leverages a flatness image in order to exploit the information found in flat surfaces of the scene. This allows for an efficient selection of planar patches from a first range image. Then, from a second image, keypoints related to said patches are extracted. In this way, our proposal computes the ego-motion by imposing a coplanarity constraint between pairs whose correspondences are iteratively updated. The proposed algorithm is tested and compared with state-of-the-art ICP algorithms. Experiments show that our proposal, running on a single thread, can run 5× faster than a multi-threaded implementation of GICP while providing more accurate localization. A second version of the algorithm is also presented, which reduces the drift even further while needing less than half the computation time of GICP. Both configurations of the algorithm run at frame rates common for most 3D lidars: 10 and 20 Hz on a standard CPU.
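
A hedged sketch of how a flatness image can be derived from an ordered range image, using second differences of range along rows and columns as a cheap curvature proxy; the actual flatness measure in the paper may differ.

import numpy as np

def flatness_image(rng):
    # rng: (H, W) range image from the lidar. Low values = locally flat.
    d2u = np.abs(rng[:, :-2] - 2.0 * rng[:, 1:-1] + rng[:, 2:])   # along rows
    d2v = np.abs(rng[:-2, :] - 2.0 * rng[1:-1, :] + rng[2:, :])   # along columns
    flat = np.full_like(rng, np.inf)                 # borders stay non-flat
    flat[1:-1, 1:-1] = d2u[1:-1, :] + d2v[:, 1:-1]
    return flat

Thresholding this image gives candidate planar patches in the first scan, to which keypoints from the next scan can be matched under the coplanarity constraint.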

10.
Front Plant Sci ; 13: 939733, 2022.
Article in English | MEDLINE | ID: mdl-35923876

ABSTRACT

Spray drift is an inescapable consequence of agricultural plant protection operations and has long been one of the major concerns in the spray application industry. Spray drift evaluation is essential to provide a basis for the rational selection of spray technique and working surroundings. Conventional sampling methods with passive collectors used in drift evaluation are complex, time-consuming, and labor-intensive. The aim of this paper is to present a method to evaluate spray drift based on a 3D LiDAR sensor and to test its feasibility as an alternative to passive collectors. First, a drift measurement algorithm was established based on the point cloud data of the 3D LiDAR. Wind tunnel tests covering three types of agricultural nozzles, three pressure settings, and five wind speed settings were conducted. The LiDAR sensor and passive collectors (polyethylene lines) were placed downwind from the nozzle to measure drift droplets in a vertical plane. The drift deposition volume on each line and the number of LiDAR droplet points at the corresponding height of the collecting line were calculated, and the influencing factors of this new method were analyzed. The results show that 3D LiDAR measurements provide rich spatial information, such as the height and width of the drift droplet distribution. High coefficients of determination (R² > 0.75) were observed for drift points measured by 3D LiDAR compared to the deposition volume captured by passive collectors, with the anti-drift IDK12002 nozzle at 0.2 MPa spray pressure showing the largest R² value (0.9583). Drift assessment with 3D LiDAR is sensitive to droplet density or drift mass in space and to the nozzle's initial droplet spectrum; in general, larger droplet density or drift mass and smaller droplet size are not conducive to LiDAR detection, and the appropriate threshold range still needs further study. This study demonstrates that 3D LiDAR has the potential to be used as an alternative tool for rapid assessment of spray drift.
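
The LiDAR-versus-collector comparison boils down to a per-height regression. A small sketch of the R² computation, assuming one LiDAR drift-point count and one deposition volume per collecting-line height:

import numpy as np

def r_squared(lidar_counts, deposition):
    # Coefficient of determination of a linear fit between the number of
    # LiDAR drift points and the deposition volume on passive collectors.
    x = np.asarray(lidar_counts, dtype=float)
    y = np.asarray(deposition, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot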

11.
Sensors (Basel) ; 22(15)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35898100

ABSTRACT

This paper presents a new synthetic dataset obtained from Gazebo simulations of an Unmanned Ground Vehicle (UGV) moving in different natural environments. To this end, a Husky mobile robot equipped with a tridimensional (3D) Light Detection and Ranging (LiDAR) sensor, a stereo camera, a Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and wheel tachometers has followed several paths using the Robot Operating System (ROS). Both points from LiDAR scans and pixels from camera images have been automatically labeled into their corresponding object classes. For this purpose, unique reflectivity values and flat colors have been assigned to each object present in the modeled environments. As a result, a public dataset, which also includes 3D pose ground truth, is provided as ROS bag files and as human-readable data. Potential applications include supervised learning and benchmarking for UGV navigation in natural environments. Moreover, to allow researchers to easily modify the dataset or directly use the simulations, the required code has also been released.


Subjects
Robotics, Benchmarking, Environment, Humans, Reactive Oxygen Species, Software
12.
Sensors (Basel) ; 22(6)2022 Mar 13.
Article in English | MEDLINE | ID: mdl-35336392

ABSTRACT

In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and other related fields, and extrinsic calibration is a necessary condition for multi-sensor fusion applications. This paper proposes an automatic 3D LIDAR-to-camera calibration framework based on graph optimization. The system can automatically identify the position of the calibration pattern and build a set of virtual feature point clouds, and it can simultaneously calibrate the LIDAR against multiple cameras. To test this framework, a multi-sensor system was formed using a mobile robot equipped with LIDAR, monocular and binocular cameras, and the pairwise calibration of the LIDAR with the two cameras was evaluated quantitatively and qualitatively. The results show that this method produces more accurate calibration results than the state-of-the-art method: the average error on the camera normalization plane is 0.161 mm, outperforming existing calibration methods. Due to the introduction of graph optimization, the original point cloud is also optimized while the extrinsic parameters between the sensors are being optimized, which effectively corrects errors introduced during data collection and makes the method robust to bad data.

13.
Sensors (Basel) ; 22(5)2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35271060

ABSTRACT

There are numerous global navigation satellite system-denied regions in urban areas, where the localization of autonomous driving remains a challenge. To address this problem, high-resolution light detection and ranging (LiDAR) sensors have recently been developed. Various methods have been proposed to improve the accuracy of localization using the precise distance measurements derived from LiDAR sensors. This study proposes an algorithm to accelerate the computational speed of LiDAR localization while maintaining the accuracy of lightweight map-matching algorithms. To this end, a point cloud map was first transformed into a normal distribution (ND) map. During this process, a vector-based normal distribution transform, suitable for graphics processing unit (GPU) parallel processing, was used. We introduce an algorithm that enables GPU parallel processing of an existing ND map-matching pipeline. The performance of the proposed algorithm was verified using an open dataset and simulations. To verify its practical performance, the real-time serial and parallel processing performances of the localization were compared using a high-performance computer and an embedded computer, respectively. The distance root-mean-square error and computation time of the proposed algorithm were compared across these platforms. The algorithm increased the computational speed of the embedded computer almost 100-fold while maintaining high localization precision.
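
The ND map conversion can be sketched compactly: each cell stores only the mean and covariance of the points that fall inside it, which is what makes the subsequent matching light enough for per-cell GPU parallelism. Cell size and minimum point count below are assumptions.

import numpy as np

def build_nd_map(points, cell=1.0, min_pts=5):
    # points: (N, 3) map points. Returns {cell index: (mean, 3x3 covariance)}.
    cells = {}
    for key, p in zip(map(tuple, np.floor(points / cell).astype(int)), points):
        cells.setdefault(key, []).append(p)
    nd_map = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= min_pts:          # need enough points for a stable covariance
            nd_map[key] = (pts.mean(axis=0), np.cov(pts.T))
    return nd_map

Scoring a scan against such a map evaluates each scan point under the Gaussian of its cell, and those per-point evaluations are independent, hence trivially parallelizable on a GPU.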

14.
Front Robot AI ; 9: 832165, 2022.
Article in English | MEDLINE | ID: mdl-35155589

ABSTRACT

Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks like spraying, harvesting, or monitoring. However, the absence of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, the research and development of localization techniques is essential to boost agricultural robotics. To address this issue, we propose an algorithm called VineSLAM suitable for localization and mapping in agriculture. This approach uses both point- and semiplane-features extracted from 3D LiDAR data to map the environment and localize the robot using a novel Particle Filter that considers both feature modalities. The numerical stability of the algorithm was tested using simulated data. The proposed methodology proved suitable for localizing a robot using only three orthogonal semiplanes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach in three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot precisely even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.

15.
Data Brief ; 40: 107667, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34977287

ABSTRACT

This paper presents our latest extension of the Brno Urban Dataset (BUD), the Winter Extension (WE). The dataset contains data from sensors commonly used in the automotive industry: four RGB cameras, a single IR camera, three 3D LiDARs, a differential RTK GNSS receiver with heading estimation, an IMU, and an FMCW radar. Data from all sensors are precisely timestamped for offline interpretation and data fusion. The most significant contribution of the dataset is its focus on winter conditions in snow-covered environments; only a few public datasets deal with these kinds of conditions. We recorded the dataset during February 2021 in Brno, Czechia, when fresh snow covered the entire city and the surrounding countryside. The dataset contains situations from the city center, suburbs, highways, and the countryside. Overall, the new extension adds three hours of real-life traffic situations from the mid-size city to the existing 10 h of original records. Additionally, we provide precalculated YOLO neural network object detection annotations for all five cameras for both the original data and the new recordings. The dataset is suitable for developing mapping and navigation algorithms as well as collision and object detection pipelines. The entire dataset is available as open source under the MIT license.

16.
Sensors (Basel) ; 21(20)2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34695994

ABSTRACT

This article aims at demonstrating the feasibility of modern deep learning techniques for the real-time detection of non-stationary objects in point clouds obtained from 3-D light detection and ranging (LiDAR) sensors. The motion segmentation task is considered in the application context of automotive Simultaneous Localization and Mapping (SLAM), where we often need to distinguish between the static parts of the environment, with respect to which we localize the vehicle, and non-stationary objects that should not be included in the map for localization. Non-stationary objects do not provide repeatable readouts, because they can be in motion, like vehicles and pedestrians, or because they do not have a rigid, stable surface, like trees and lawns. The proposed approach exploits images synthesized from the intensity data yielded by modern LiDARs along with the usual range measurements. We demonstrate that non-stationary objects can be detected using neural network models trained with 2-D grayscale images in a supervised or unsupervised training process. This concept makes it possible to alleviate the lack of large datasets of 3-D laser scans with point-wise annotations of non-stationary objects. The point clouds are filtered using the corresponding intensity images with labeled pixels. Finally, we demonstrate that the detection of non-stationary objects using our approach improves the localization results and map consistency in a laser-based SLAM system.


Subjects
Lasers, Neural Networks (Computer), Motion (Physics)
17.
Sensors (Basel) ; 21(10)2021 May 15.
Article in English | MEDLINE | ID: mdl-34063368

ABSTRACT

Although visual SLAM (simultaneous localization and mapping) methods obtain very accurate results using optimization of residual errors defined with respect to the matching features, SLAM systems based on 3-D laser (LiDAR) data commonly employ variants of the iterative closest points algorithm and raw point clouds as the map representation. However, it is possible to extract from point clouds features that are more spatially extended and more meaningful than points: line segments and/or planar patches. In particular, such features provide a natural way to represent human-made environments, such as urban and mixed indoor/outdoor scenes. In this paper, we analyze the advantages of a LiDAR-based SLAM that employs high-level geometric features in large-scale urban environments. We present a new approach to LiDAR SLAM that uses planar patches and line segments for map representation and employs factor graph optimization, typical of state-of-the-art visual SLAM, for the final map and trajectory optimization. The new map structure and feature matching make it possible to implement an efficient loop closure method, which exploits learned descriptors for place recognition and the factor graph for optimization. With these improvements, the overall software structure is based on the proven LOAM concept to ensure real-time operation. A series of experiments were performed to compare the proposed solution to the open-source LOAM, considering different approaches to loop closure computation. The results are compared using standard metrics of trajectory accuracy, focusing on the final quality of the estimated trajectory and the consistency of the environment map. With some well-discussed reservations, our results demonstrate the gains from using high-level features in the full-optimization approach to large-scale LiDAR SLAM.

18.
Sensors (Basel) ; 20(10)2020 May 16.
Article in English | MEDLINE | ID: mdl-32429427

ABSTRACT

This paper details a new extrinsic calibration method for a scanning laser rangefinder that is based on estimation from the geometric ground plane. The method remains efficient in the challenging experimental configuration of a high LiDAR inclination angle. In this configuration, the calibration of the LiDAR sensor is a key problem found in various domains, in particular to guarantee the efficiency of ground-surface object detection. The proposed extrinsic calibration method consists of the following steps: fitting the ground plane, estimating the extrinsic parameters (3D orientation angles and altitude), and optimizing the extrinsic parameters. Finally, the results are presented in terms of precision and robustness against variations in the LiDAR's orientation and range accuracy, showing the stability and accuracy of the proposed extrinsic calibration method, which was validated through numerical simulation and real data.
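
A minimal sketch of the first two steps, fitting the ground plane by SVD and reading roll, pitch, and altitude off the plane parameters; the angle conventions are assumptions and would need to match the target vehicle frame.

import numpy as np

def ground_extrinsics(ground_pts):
    # ground_pts: (N, 3) LiDAR returns assumed to lie on the ground plane.
    centroid = ground_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(ground_pts - centroid)
    n = vt[2]                                  # normal = least-varying direction
    if n[2] < 0:
        n = -n                                 # orient the normal upwards
    altitude = abs(np.dot(n, centroid))        # sensor height above the plane
    roll = np.arctan2(n[1], n[2])
    pitch = -np.arctan2(n[0], np.hypot(n[1], n[2]))
    return roll, pitch, altitude               # inputs to the optimization step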

19.
Sensors (Basel) ; 20(7)2020 Mar 26.
Article in English | MEDLINE | ID: mdl-32225085

ABSTRACT

Simultaneous localization and mapping has become a basic requirement for most autonomous mobile robots. However, LiDAR scans suffer from skewing caused by high-acceleration motion, which reduces precision in the subsequent mapping or classification process. In this study, we improve the quality of mapping results by de-skewing the LiDAR scan. By integrating high-sampling-frequency IMU (inertial measurement unit) measurements and establishing a motion equation over time, we obtain the pose of every point in the scan's frame. All points in the scan are then corrected and transformed into the frame of the first point. We expand the optimization scope from the current scan to a local range of point clouds, which not only considers the motion of the LiDAR but also takes advantage of neighboring LiDAR scans. Finally, we validate the performance of our algorithm in indoor and outdoor experiments by comparing the mapping results before and after de-skewing. Experimental results show that our method smooths the scan skewing on each channel and improves mapping accuracy.
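
A sketch of the per-point correction, assuming IMU poses are available as timestamped quaternions and translations bracketing the scan; SciPy's Slerp interpolates the rotations, and everything is mapped back into the first point's frame.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew(points, t_points, t_imu, quats, trans):
    # points: (N, 3); t_points: (N,) per-point stamps inside [t_imu[0], t_imu[-1]];
    # quats: (M, 4) xyzw IMU orientations; trans: (M, 3) IMU positions.
    slerp = Slerp(t_imu, Rotation.from_quat(quats))
    R_i = slerp(t_points)                      # interpolated rotation per point
    p_i = np.stack([np.interp(t_points, t_imu, trans[:, k]) for k in range(3)],
                   axis=1)                     # interpolated translation per point
    world = R_i.apply(points) + p_i            # each point in the world frame
    R0 = slerp([t_points[0]])[0]               # pose of the first point
    return R0.inv().apply(world - p_i[0])      # scan expressed in that frame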

20.
Sensors (Basel) ; 20(4)2020 Feb 19.
Article in English | MEDLINE | ID: mdl-32092995

ABSTRACT

Existing feature-point calibration methods for 3D light detection and ranging (LiDAR) and cameras suffer from calibration boards of widely varying forms, incomplete information extraction methods, and large calibration errors. To address these problems, a novel calibration board with local gradient depth information and main-plane square corner information (BWDC) was designed. In addition, a "three-step fitting interpolation method" was proposed to select feature points and obtain their corresponding coordinates in the LiDAR coordinate system and the camera pixel coordinate system based on BWDC. Finally, calibration experiments were carried out, and the calibration results were verified by methods such as incremental verification and reprojection error comparison. The results show that using BWDC and the three-step fitting interpolation method yields an accurate coordinate transformation matrix and accurate intrinsic and extrinsic sensor parameters, which vary within 0.2% in repeated experiments. The difference between the experimental and actual values in the incremental verification experiment is about 0.5%. The average reprojection error is 1.8312 pixels, and its variation at different distances does not exceed 0.1 pixels, which also shows that the calibration method is accurate and stable.
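
For concreteness, a hedged sketch of the reprojection-error check used in the verification: project the LiDAR feature points through the estimated extrinsics and intrinsics and measure the pixel distance to the detected corners. The pinhole model without distortion is a simplifying assumption.

import numpy as np

def mean_reprojection_error(pts_lidar, pix, K, R, t):
    # pts_lidar: (N, 3) feature points in the LiDAR frame; pix: (N, 2) detected
    # pixel coordinates; K: 3x3 intrinsics; R, t: LiDAR-to-camera extrinsics.
    cam = (R @ pts_lidar.T).T + t              # transform into the camera frame
    uvw = (K @ cam.T).T
    proj = uvw[:, :2] / uvw[:, 2:3]            # perspective division
    return np.linalg.norm(proj - pix, axis=1).mean()   # pixels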
