Results 1 - 8 of 8
1.
Sensors (Basel) ; 23(24)2023 Dec 17.
Article in English | MEDLINE | ID: mdl-38139722

ABSTRACT

Environmental perception plays a fundamental role in decision-making and is crucial for ensuring the safety of autonomous driving. A pressing challenge is the online evaluation of perception uncertainty, a crucial step towards ensuring the safety and industrialization of autonomous driving. High-definition maps offer precise information about static elements on the road, along with their topological relationships, and can therefore provide valuable prior information for assessing the uncertainty associated with static elements. This paper introduces a method, based on the high-definition map, for evaluating perception uncertainty online for both static and dynamic elements. The method proceeds in two stages: first, the uncertainty of static elements in perception, including the uncertainty of their existence and spatial information, is assessed from the spatial and topological features of the static environmental elements; second, an online assessment model for the uncertainty of dynamic elements is constructed, in which the online evaluation of static element uncertainty is used to infer dynamic element uncertainty. A model for recognizing the driving scenario and weather conditions is then constructed to identify, in real time, the factors that trigger perception uncertainty during autonomous driving, further refining the online assessment model. Verification on the nuScenes dataset shows that the proposed map-based uncertainty assessment method effectively evaluates the performance of real-time perception results.
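As a rough illustration of the static-element step, the sketch below scores the existence uncertainty of perceived static elements by matching them against HD-map elements. The matching rule, the gating threshold, and all values are illustrative assumptions, not the paper's formulation, which also uses topological features.

```python
import numpy as np

def static_existence_uncertainty(detections, hd_map_elements, gate=2.0):
    """Toy existence-uncertainty score for detected static elements.

    Each detection is matched to the nearest HD-map element; the residual
    distance, relative to a gating threshold, serves as a crude proxy for the
    uncertainty that the element really exists where perception placed it.
    """
    scores = []
    for det in detections:                        # det: (x, y) of a perceived static element
        dists = np.linalg.norm(hd_map_elements - det, axis=1)
        nearest = dists.min() if len(dists) else np.inf
        # A large residual beyond the gate maps to high uncertainty, clipped to [0, 1].
        scores.append(float(np.clip(nearest / gate, 0.0, 1.0)))
    return np.array(scores)

# Example: two detections checked against three mapped lane markers.
dets = np.array([[1.0, 0.2], [5.0, 4.0]])
map_pts = np.array([[1.1, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(static_existence_uncertainty(dets, map_pts))   # low for the first, high for the second
```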

2.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905068

ABSTRACT

Deep neural network algorithms have achieved impressive performance in object detection. Real-time evaluation of perception uncertainty from deep neural network algorithms is indispensable for safe driving in autonomous vehicles, yet more research is needed on how to assess the effectiveness and uncertainty of perception results in real time. This paper proposes a novel real-time evaluation method combining multi-source perception fusion and deep ensembles. The effectiveness of single-frame perception results is evaluated in real time; then, the spatial uncertainty of the detected objects and its influencing factors are analyzed. Finally, the accuracy of the spatial uncertainty is validated against ground truth from the KITTI dataset. The results show that the evaluation of perception effectiveness reaches 92% accuracy, and both the uncertainty and the error are positively correlated with the ground truth. The spatial uncertainty is related to the distance and degree of occlusion of the detected objects.
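For readers unfamiliar with deep-ensemble uncertainty, the following minimal sketch computes the spatial spread of one detected object across ensemble members. It assumes the member predictions are already associated with each other, and it is a generic deep-ensemble illustration rather than the paper's exact pipeline.

```python
import numpy as np

def ensemble_spatial_uncertainty(member_boxes):
    """Spatial uncertainty of one detected object from a deep ensemble.

    `member_boxes` is an (N, 4) array of [x, y, w, h] predictions for the same
    object from N ensemble members. The per-coordinate standard deviation is
    returned as a simple measure of spatial uncertainty.
    """
    member_boxes = np.asarray(member_boxes, dtype=float)
    mean_box = member_boxes.mean(axis=0)          # fused box estimate
    spread = member_boxes.std(axis=0)             # spatial uncertainty per coordinate
    return mean_box, spread

# Example: three ensemble members disagree slightly on the same car.
boxes = [[10.0, 5.0, 4.2, 1.8],
         [10.3, 5.1, 4.0, 1.7],
         [ 9.8, 4.9, 4.3, 1.9]]
mean_box, spread = ensemble_spatial_uncertainty(boxes)
print(mean_box, spread)
```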

3.
Sensors (Basel) ; 20(24)2020 Dec 18.
Article in English | MEDLINE | ID: mdl-33353016

ABSTRACT

With the rapid development of automated vehicles (AVs), increasing demands are placed on environmental perception. Among the commonly used sensors, MMW radar plays an important role due to its low cost, adaptability to different weather, and motion detection capability. Radar can provide different data types to satisfy requirements for various levels of autonomous driving. The objective of this study is to present an overview of state-of-the-art radar-based technologies applied in AVs. Although several published papers focus on MMW radars for intelligent vehicles, no general survey exists on deep learning applied to radar data for autonomous vehicles; this paper aims to provide such a survey. First, we introduce models and representations of millimeter-wave (MMW) radar data. Second, we present radar-based applications on AVs. For low-level automated driving, radar data have been widely used in advanced driver-assistance systems (ADAS). For high-level automated driving, radar data are used in object detection, object tracking, motion prediction, and self-localization. Finally, we discuss the remaining challenges and future directions of related studies.

4.
Sensors (Basel) ; 20(7)2020 Mar 27.
Article in English | MEDLINE | ID: mdl-32230965

ABSTRACT

Real-time vehicle localization (i.e., position and orientation estimation in the world coordinate system) with high accuracy is a fundamental function of an intelligent vehicle (IV) system. In the process of commercializing IVs, many car manufacturers attempt to avoid high-cost sensor systems (e.g., RTK GNSS and LiDAR) in favor of low-cost optical sensors such as cameras. The same cost-saving strategy also gives rise to an increasing number of vehicles equipped with High Definition (HD) maps. Building on these existing technologies, this article presents Monocular Localization with Vector HD Map (MLVHM), a novel camera-based map-matching method that efficiently aligns semantic-level geometric features in camera-acquired frames against the vector HD map to achieve high-precision absolute vehicle localization at minimal cost. The semantic features are carefully chosen both for ease of alignment with the map vectors and for resilience against occlusion and fluctuations in illumination. An effective data association method in MLVHM serves as the basis for camera position estimation by minimizing feature re-projection errors, and frame-to-frame motion fusion is further introduced for reliable localization results. Experiments show that MLVHM achieves high-precision vehicle localization with an RMSE of 24 cm and no cumulative error. In addition, it matches or even exceeds the accuracy of existing map-matching algorithms while using only low-cost on-board sensors and lightweight HD maps.
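The core optimization, minimizing feature re-projection error against map landmarks, can be sketched as follows. The pinhole intrinsics, the yaw-plus-translation parameterization, and all numbers are assumptions for illustration; the actual MLVHM method additionally performs semantic data association and frame-to-frame motion fusion.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])              # assumed camera intrinsics

def project(points_w, yaw, t):
    """Project world points into the image for a camera with heading `yaw`."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],                   # rotation about the vertical axis
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    p_cam = (points_w - t) @ R                   # world -> camera frame (applies R^T row-wise)
    uvw = p_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, points_w, obs_px):
    """Pixel re-projection residuals for a pose (yaw, tx, ty, tz)."""
    yaw, tx, ty, tz = params
    return (project(points_w, yaw, np.array([tx, ty, tz])) - obs_px).ravel()

# Synthetic example: map vector points observed from a ground-truth pose.
map_pts = np.array([[2.0, 0.0, 10.0], [-1.5, 0.5, 12.0], [0.5, -0.3, 8.0], [3.0, 1.0, 15.0]])
true = np.array([0.05, 0.3, 0.0, -0.2])          # yaw, tx, ty, tz
obs = project(map_pts, true[0], true[1:])
fit = least_squares(residuals, x0=np.zeros(4), args=(map_pts, obs))
print(fit.x)                                     # recovers approximately `true`
```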

5.
Sensors (Basel) ; 20(2)2020 Jan 08.
Article in English | MEDLINE | ID: mdl-31936382

ABSTRACT

The occupancy grid is a popular environment model that is widely applied for autonomous navigation of mobile robots. This model encodes obstacle information into grid cells as a reference of the space state. However, when navigating on roads, the planning module of an autonomous vehicle needs a semantic understanding of the scene, especially concerning the accessibility of the driving space. This paper presents a grid-based evidential approach for modeling semantic road space by taking advantage of a prior map that contains lane-level information. Road rules are encoded in the grid for semantic understanding. Our approach focuses on dealing with localization uncertainty, a key issue when parsing information from the prior map. Readings from an exteroceptive sensor are also integrated into the grid to provide real-time obstacle information. All of the information is managed in an evidential framework based on Dempster-Shafer theory. Results on real roads are reported with qualitative evaluation and quantitative analysis of the constructed grids to show the performance and behavior of the method in real-time application.
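The Dempster-Shafer combination that such an evidential grid performs per cell can be illustrated with a minimal sketch on the frame {free, occupied}. The cell model and numbers are generic illustrations of Dempster's rule, not the paper's specific discounting and road-rule encoding.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions on the frame {F (free), O (occupied)}.

    Masses are dicts with keys 'F', 'O', and 'FO' (ignorance). Conflicting
    mass is redistributed by normalization, per Dempster's rule.
    """
    conflict = m1['F'] * m2['O'] + m1['O'] * m2['F']
    norm = 1.0 - conflict
    return {
        'F':  (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / norm,
        'O':  (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / norm,
        'FO': (m1['FO'] * m2['FO']) / norm,
    }

# The prior map says the cell is probably drivable; a sensor return says occupied.
map_mass    = {'F': 0.6, 'O': 0.1, 'FO': 0.3}
sensor_mass = {'F': 0.1, 'O': 0.7, 'FO': 0.2}
print(dempster_combine(map_mass, sensor_mass))
```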

6.
Sensors (Basel) ; 19(22)2019 Nov 18.
Article in English | MEDLINE | ID: mdl-31752221

ABSTRACT

Accurate road information is important for applications involving road maintenance, intelligent transportation, and road network updates. Mobile laser scanning (MLS) can effectively extract road information. However, accurately extracting road edges from large-scale data under complex road conditions, including both structural and non-structural road types, remains difficult. In this study, a robust method is proposed to automatically extract structural and non-structural road edges based on a topological network of laser points between adjacent scan lines and auxiliary surfaces. The extraction of road and curb points is achieved mainly from the roughness of the extracted surface, without relying on traditional thresholds (e.g., height jump, slope, and density). Five large-scale road datasets, containing different types of road curbs and complex road scenes, were used to evaluate the practicality, stability, and validity of the proposed method via qualitative and quantitative analyses. The measured correctness, completeness, and quality of the extracted road edges exceeded 95.5%, 91.7%, and 90.9%, respectively. These results confirm that the proposed method can extract road edges from large-scale MLS datasets without auxiliary intensity, image, or geographic data. The method is effective regardless of whether the road width is fixed or the road shape is regular, and in the presence of pedestrians and vehicles. Most importantly, it provides a valuable solution for road edge extraction that is useful to road authorities developing intelligent transportation systems, such as those required by self-driving vehicles.
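A simple way to picture roughness-based edge detection on one scan line is to fit a local line to each point's neighbourhood and use the RMS residual as roughness, which peaks at curb steps. The window size, line model, and synthetic curb below are illustrative assumptions, not the paper's auxiliary-surface formulation.

```python
import numpy as np

def scanline_roughness(points, window=5):
    """Local roughness along one MLS scan line from line-fit residuals."""
    points = np.asarray(points, dtype=float)     # (N, 2): lateral offset, height
    half = window // 2
    rough = np.zeros(len(points))
    for i in range(half, len(points) - half):
        seg = points[i - half:i + half + 1]
        A = np.c_[seg[:, 0], np.ones(len(seg))]             # fit height = a*x + b
        coef, *_ = np.linalg.lstsq(A, seg[:, 1], rcond=None)
        rough[i] = np.sqrt(np.mean((A @ coef - seg[:, 1]) ** 2))
    return rough

# Flat road with a 12 cm curb step: roughness peaks near the step.
x = np.linspace(0.0, 5.0, 50)
z = np.where(x < 3.0, 0.0, 0.12)
r = scanline_roughness(np.c_[x, z])
print(np.argmax(r), r.max())
```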

7.
Sensors (Basel) ; 19(9)2019 Apr 26.
Article in English | MEDLINE | ID: mdl-31035458

ABSTRACT

Future intelligent transport systems depend on the accurate positioning of multiple targets in the road scene, including vehicles and all other moving or static elements. The existing self-positioning capability of individual vehicles remains insufficient, and bottlenecks in developing on-board perception systems stymie further improvements in the precision and integrity of target positioning. Vehicle-to-everything (V2X) communication, which is fast becoming a standard component of intelligent and connected vehicles, makes new sources of information accessible, such as dynamically updated high-definition (HD) maps. In this paper, we propose a unified theoretical framework for multiple-target positioning that fuses multi-source heterogeneous information from vehicles' on-board sensors and V2X technology. Numerical and theoretical studies are conducted to evaluate the performance of the proposed framework. With a low-cost global navigation satellite system (GNSS) coupled with an inertial navigation system (INS), on-board sensors, and a commonly equipped HD map, the achieved precision of multiple-target positioning can meet the requirements of highly automated vehicles. Meanwhile, the integrity of target sensing is significantly improved by sharing sensor information and exploiting map data. Furthermore, our framework is more adaptable to traffic scenarios than state-of-the-art techniques.
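As background for the low-cost GNSS/INS coupling mentioned above, the toy sketch below runs a one-dimensional constant-velocity Kalman filter that corrects an INS-style prediction with GNSS position fixes. The state model and noise values are assumptions for illustration, not the paper's multi-target framework.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                       # GNSS observes position only
Q = np.diag([0.01, 0.1])                         # assumed process noise (INS drift)
R = np.array([[2.0]])                            # assumed GNSS measurement noise

def kf_step(x, P, z):
    """One predict/update cycle fusing an INS prediction with a GNSS fix."""
    x = F @ x                                    # INS-style propagation
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (z - H @ x)                       # correct with the GNSS position
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 10.0
for z in ([10.2], [11.1], [12.0]):               # noisy GNSS fixes
    x, P = kf_step(x, P, np.array(z))
print(x)                                         # position and velocity estimate
```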

8.
J Acoust Soc Am ; 140(3): 1739, 2016 09.
Article in English | MEDLINE | ID: mdl-27914411

ABSTRACT

Identification and measurement of moving sound sources form the basis for vehicle noise control. Acoustic holography has been successfully applied to identifying moving sound sources since the 1990s. However, owing to the stringent accuracy requirements on holographic data, the maximum velocity currently handled by acoustic holography is just above 100 km/h. The objective of this study was to establish a method based on the complete Morse acoustic model to restore the measured signal in high-speed situations, and to propose a far-field acoustic holography method applicable to high-speed moving sound sources. Simulations compared the proposed far-field acoustic holography based on the complete Morse model with acoustic holography based on the simplified Morse model and with traditional delay-and-sum beamforming. Experiments with a high-speed train running at 278 km/h validated the proposed far-field acoustic holography. This study extends the applications of acoustic holography to high-speed situations and establishes a basis for quantitative far-field acoustic holography measurements.
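For context on the baseline mentioned in the abstract, the sketch below implements classic delay-and-sum beamforming for a stationary focus point. It only illustrates that baseline; the moving-source case treated in the paper requires the Morse model to restore the Doppler-distorted signals. All geometry and signal parameters are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, focus_point, fs, c=343.0):
    """Delay-and-sum beamformer output focused on one point.

    Each microphone signal is advanced by its propagation delay from the
    focus point, so wavefronts emitted there add coherently, then averaged.
    """
    out = np.zeros(signals.shape[1])
    for sig, pos in zip(signals, mic_positions):
        delay = np.linalg.norm(focus_point - pos) / c          # seconds
        shift = int(round(delay * fs))                          # samples
        out[: signals.shape[1] - shift] += sig[shift:]          # advance by the delay
    return out / len(signals)

# Example: a 1 kHz tone from a source 1 m away, recorded by two microphones.
fs = 48000
t = np.arange(0, 0.02, 1 / fs)
mics = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
src = np.array([0.0, 0.0, 1.0])
sigs = np.array([np.sin(2 * np.pi * 1000 * (t - np.linalg.norm(src - m) / 343.0)) for m in mics])
print(delay_and_sum(sigs, mics, src, fs).max())                # near 1: coherent summation
```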
