1.
PeerJ Comput Sci ; 10: e2209, 2024.
Article in English | MEDLINE | ID: mdl-39145222

ABSTRACT

Background: Autonomous driving is a growing research area that brings benefits to science, the economy, and society. Although there are several studies in this area, there is currently no fully autonomous vehicle, particularly for off-road navigation. Autonomous vehicle (AV) navigation is a complex process based on the application of multiple technologies and algorithms for data acquisition, management, and understanding. In particular, a self-driving assistance system supports key functionalities such as sensing and terrain perception, real-time vehicle mapping and localization, path prediction and actuation, and communication and safety measures, among others. Methods: In this work, an original approach for autonomous vehicle driving in off-road environments that combines semantic segmentation of video frames with subsequent real-time route planning is proposed. To check the relevance of the proposal, a modular framework for assistive driving in off-road scenarios oriented to resource-constrained devices has been designed. In the scene perception module, a deep neural network is used to segment Red-Green-Blue (RGB) images obtained from a camera. The second, traversability module fuses Light Detection And Ranging (LiDAR) point clouds with the segmentation results to create a binary occupancy grid map that provides scene understanding during autonomous navigation. Finally, the last module, based on the Rapidly-exploring Random Tree (RRT) algorithm, predicts a path. The Freiburg Forest Dataset (FFD) and the RELLIS-3D dataset were used to assess the performance of the proposed approach. The theoretical contributions of this article consist of an original approach to image semantic segmentation tailored to off-road driving scenarios, as well as the adaptation of the A* shortest-route search and RRT algorithms to AV path planning. Results: The reported results are very promising and show several advantages over previously reported solutions. The segmentation precision reaches 85.9% for FFD and 79.5% for RELLIS-3D on the most frequent semantic classes, and compared with other approaches, the proposed approach requires less computational time for path planning.
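The abstract names the RRT algorithm and a binary occupancy grid but gives no implementation details. The Python sketch below shows how a basic RRT planner could search such a grid; the function and parameter names (rrt_plan, step, goal_tol) and the toy grid are assumptions for illustration, not the authors' code.

import numpy as np

def rrt_plan(grid, start, goal, max_iters=5000, step=5, goal_tol=5, rng=None):
    """Plan on a 2D binary occupancy grid (0 = free, 1 = occupied) with a basic
    Rapidly-exploring Random Tree. Returns (row, col) waypoints or None."""
    rng = rng or np.random.default_rng(0)
    nodes = [np.asarray(start, dtype=float)]
    parents = [0]

    def collision_free(a, b):
        # Sample points along the segment a->b and reject it if any falls on an
        # occupied or out-of-bounds cell.
        for t in np.linspace(0.0, 1.0, int(np.linalg.norm(b - a)) + 2):
            r, c = np.round(a + t * (b - a)).astype(int)
            if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]) or grid[r, c]:
                return False
        return True

    for _ in range(max_iters):
        sample = rng.uniform([0, 0], grid.shape)           # random sample in the grid
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not collision_free(nodes[nearest], new):
            continue
        nodes.append(new)
        parents.append(nearest)
        if np.linalg.norm(new - goal) < goal_tol:           # close enough: backtrack
            path, i = [tuple(np.round(goal).astype(int))], len(nodes) - 1
            while i != 0:
                path.append(tuple(np.round(nodes[i]).astype(int)))
                i = parents[i]
            path.append(start)
            return path[::-1]
    return None

# Toy usage: empty 100x100 grid with a wall that leaves a gap on the right.
grid = np.zeros((100, 100), dtype=np.uint8)
grid[40:60, 0:80] = 1
print(rrt_plan(grid, start=(10, 10), goal=(90, 90)))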

2.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001113

ABSTRACT

The development of intelligent transportation systems (ITS), vehicular ad hoc networks (VANETs), and autonomous driving (AD) has progressed rapidly in recent years, driven by artificial intelligence (AI), the Internet of Things (IoT), and their integration with dedicated short-range communications (DSRC) systems and fifth-generation (5G) networks. This has led to improved mobility conditions in different road propagation environments: urban, suburban, rural, and highway. The use of these communication technologies has enabled drivers and pedestrians to become more aware of the need to improve their behavior and decision making in adverse traffic conditions by sharing information from cameras, radars, and sensors widely deployed in vehicles and road infrastructure. However, wireless data transmission in VANETs is affected by the specific conditions of the propagation environment, weather, terrain, traffic density, and the frequency bands used. In this paper, we characterize the path loss based on an extensive measurement campaign carried out in vehicular environments at 700 MHz and 5.9 GHz under realistic road traffic conditions. Using a linear dual-slope path loss propagation model, the path loss exponents and the standard deviations of the shadowing are reported. This study focused on three different environments: urban with high traffic density (U-HD), urban with moderate/low traffic density (U-LD), and suburban (SU). The results presented here can be easily incorporated into VANET simulators to develop, evaluate, and validate new protocols and system architecture configurations under more realistic propagation conditions.
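For context, a linear dual-slope path loss model of the kind referred to here is commonly written as PL(d) = PL(d0) + 10*n1*log10(d/d0) + Xsigma for d <= d_bp, and PL(d) = PL(d0) + 10*n1*log10(d_bp/d0) + 10*n2*log10(d/d_bp) + Xsigma beyond the breakpoint d_bp, where Xsigma is lognormal shadowing. The Python sketch below evaluates that generic form; every numeric value is a placeholder, not one of the exponents or shadowing deviations measured in the paper.

import numpy as np

def dual_slope_path_loss(d, d0=1.0, pl0=47.0, n1=1.9, n2=3.5, d_bp=100.0,
                         sigma=3.0, rng=None):
    """Linear dual-slope log-distance path loss in dB.

    d      : distance(s) in meters (array-like, > 0)
    d0     : reference distance in meters
    pl0    : path loss at d0 in dB
    n1, n2 : path loss exponents before and after the breakpoint d_bp
    sigma  : standard deviation of the lognormal shadowing term (dB)
    """
    d = np.asarray(d, dtype=float)
    rng = rng or np.random.default_rng(0)
    shadowing = rng.normal(0.0, sigma, size=d.shape)
    near = pl0 + 10 * n1 * np.log10(d / d0)
    far = pl0 + 10 * n1 * np.log10(d_bp / d0) + 10 * n2 * np.log10(d / d_bp)
    return np.where(d <= d_bp, near, far) + shadowing

# Example: path loss at a few Tx-Rx separations (illustrative parameters only).
print(dual_slope_path_loss([10, 50, 200, 500]))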

3.
Sensors (Basel) ; 24(7)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38610309

ABSTRACT

Autonomous driving navigation relies on diverse approaches, each with advantages and limitations that depend on various factors. Modular systems excel when HD maps are available, while end-to-end methods dominate mapless scenarios; however, few approaches leverage the strengths of both. This paper proposes a hybrid architecture that seamlessly integrates modular perception and control modules with data-driven path planning. This design combines the strengths of both approaches, enabling clear understanding and debugging of individual components while harnessing the learning power of end-to-end methods. Our proposed architecture achieved first and second place in the 2023 CARLA Autonomous Driving Challenge's SENSORS and MAP tracks, respectively. These results demonstrate the architecture's effectiveness in both map-based and mapless navigation. We achieved a driving score of 41.56 and the highest route completion of 86.03 in the MAP track on leaderboard 1, and driving scores of 35.36 and 1.23 with route completions of 85.01 and 9.55 in the SENSORS track on leaderboards 1 and 2, respectively. The leaderboard 2 results placed the hybrid architecture in first position, winning the 2023 edition of the CARLA Autonomous Driving Competition.
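The abstract describes the hybrid design only at a high level. The skeleton below is a speculative Python illustration of how modular perception and control stages might wrap a learned, data-driven path planner; all class and method names (PerceptionModule, LearnedPlanner, Controller, Scene) are hypothetical and not taken from the paper.

from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]

@dataclass
class Scene:
    """Hypothetical intermediate representation produced by modular perception."""
    obstacles: List[Waypoint]
    lane_center: List[Waypoint]
    speed_limit: float

class PerceptionModule:
    """Modular, interpretable stage: raw sensors in, structured scene out."""
    def process(self, camera_frame, lidar_points) -> Scene:
        # Placeholder: a real system would run detection/segmentation here.
        return Scene(obstacles=[],
                     lane_center=[(0.0, float(i)) for i in range(10)],
                     speed_limit=8.0)

class LearnedPlanner:
    """Data-driven stage: maps the structured scene to a short horizon of waypoints.
    In practice this would be a trained network; here it simply follows the lane."""
    def plan(self, scene: Scene) -> List[Waypoint]:
        return scene.lane_center[:5]

class Controller:
    """Modular control stage: turns waypoints into steering/throttle commands."""
    def control(self, waypoints: List[Waypoint]) -> Tuple[float, float]:
        x, _ = waypoints[0]
        steering = max(-1.0, min(1.0, -0.5 * x))   # steer toward the next waypoint
        throttle = 0.4
        return steering, throttle

# One tick of the hybrid pipeline.
scene = PerceptionModule().process(camera_frame=None, lidar_points=None)
waypoints = LearnedPlanner().plan(scene)
print(Controller().control(waypoints))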

4.
Sensors (Basel) ; 21(22)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34833741

ABSTRACT

Increasingly, robotic systems require some level of perception of the scene to interact in real time, but they also require specialized equipment, such as sensors, to adequately reach high performance standards. It is therefore essential to explore alternatives that reduce the cost of these systems. For example, a common problem addressed by intelligent robotic systems is path planning. This problem involves different subsystems, such as perception, localization, control, and planning, and demands a quick response time. Consequently, the design of solutions is constrained and requires specialized components, increasing cost and development time. In addition, virtual reality is employed to train and evaluate algorithms by generating virtual data. The virtual dataset can then be connected with the real world through Generative Adversarial Networks (GANs), reducing development time and requiring only limited samples of the physical world. To describe performance, metadata details the properties of the agents in an environment. The metadata approach is tested with an augmented reality system and a micro aerial vehicle (MAV), where both systems are executed in a real environment and implemented on embedded devices. This development helps to guide alternatives that reduce resources and costs, but external factors, such as illumination variation, limit these implementations because the system depends only on a conventional camera.
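The abstract mentions using GANs to connect virtual datasets with the real world but does not describe the networks involved. The sketch below is a generic adversarial training step written in PyTorch (a library choice assumed here), with illustrative layer sizes and a 64x64 input assumption; it is not the architecture used in the paper.

import torch
from torch import nn, optim

# Generator: translates a synthetic (virtual) RGB image toward the real domain.
generator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
)

# Discriminator: scores whether an image looks like it came from the real domain.
# The final linear layer assumes 64x64 inputs (two stride-2 convs -> 16x16 maps).
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = optim.Adam(generator.parameters(), lr=2e-4)
opt_d = optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(virtual_batch, real_batch):
    """One adversarial update: the generator tries to make virtual images look
    real; the discriminator tries to tell translated images from real ones."""
    # Discriminator update (generator output detached).
    fake = generator(virtual_batch).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(real_batch.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update (fool the discriminator).
    fake = generator(virtual_batch)
    g_loss = bce(discriminator(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random 64x64 images standing in for virtual and real samples.
virtual = torch.rand(4, 3, 64, 64) * 2 - 1
real = torch.rand(4, 3, 64, 64) * 2 - 1
print(train_step(virtual, real))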


Subject(s)
Augmented Reality, Virtual Reality, Algorithms, Metadata
5.
Sensors (Basel) ; 17(10)2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29035334

ABSTRACT

Autonomous driving on public roads requires precise localization within a range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been used successfully for obstacle detection, mapping, and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle on a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
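The abstract combines a short-range lane-marking detector with dead reckoning but omits the details. The Python sketch below shows, under assumptions of my own (a planar pose model and vehicle-frame detections given as lateral/forward offsets), how dead-reckoned odometry could accumulate lane-marking detections into a trail behind the vehicle; it is not the authors' implementation.

import math

class DeadReckoning:
    """Minimal planar dead reckoning: integrate odometry distance and yaw rate
    to propagate the vehicle pose (x, y, heading)."""
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading
        self.lane_trail = []   # lane-marking points accumulated behind the vehicle

    def update(self, distance, yaw_rate, dt):
        self.heading += yaw_rate * dt
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def add_lane_detection(self, lateral_offset, forward_offset):
        """Transform a short-range lane-marking detection, given in the vehicle
        frame, into the dead-reckoned world frame and append it to the trail."""
        cos_h, sin_h = math.cos(self.heading), math.sin(self.heading)
        wx = self.x + forward_offset * cos_h - lateral_offset * sin_h
        wy = self.y + forward_offset * sin_h + lateral_offset * cos_h
        self.lane_trail.append((wx, wy))

# Toy usage: drive forward while gently turning and log a detection each step.
dr = DeadReckoning()
for _ in range(10):
    dr.update(distance=0.5, yaw_rate=0.02, dt=0.1)
    dr.add_lane_detection(lateral_offset=1.8, forward_offset=4.0)
print(dr.x, dr.y, len(dr.lane_trail))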
