Results 1 - 4 of 4

1.
Sensors (Basel); 24(17), 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275752

ABSTRACT

Current state-of-the-art (SOTA) LiDAR-only detectors perform well on 3D object detection tasks, but point cloud data are typically sparse and lack semantic information. Detailed semantic information obtained from camera images can be combined with existing LiDAR-based detectors to create a robust 3D detection pipeline. With two different data types, a major challenge in developing multi-modal sensor fusion networks is achieving effective data fusion while managing computational resources. With separate 2D and 3D feature extraction backbones, feature fusion can become more challenging because the two modalities generate different gradients, leading to gradient conflicts and suboptimal convergence during network optimization. To this end, we propose a 3D object detection method, Attention-Enabled Point Fusion (AEPF). AEPF uses images and voxelized point cloud data as inputs and estimates 3D bounding boxes of object locations as outputs. An attention mechanism is introduced into an existing feature fusion strategy to improve 3D detection accuracy, and two variants are proposed. These two variants, AEPF-Small and AEPF-Large, address different needs. AEPF-Small, with a lightweight attention module and fewer parameters, offers fast inference. AEPF-Large, with a more complex attention module and more parameters, provides higher accuracy than baseline models. Experimental results on the KITTI validation set show that AEPF-Small maintains SOTA 3D detection accuracy while running inference at higher speeds. AEPF-Large achieves mean average precision scores of 91.13, 79.06, and 76.15 for the car class's easy, medium, and hard targets, respectively, on the KITTI validation set. Results from ablation experiments are also presented to support the choice of model architecture.
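
For illustration, a minimal sketch of what an attention-gated point/image feature fusion step could look like is shown below. The module structure, dimensions, and gating design are assumptions chosen for exposition; they are not the AEPF architecture described in the abstract.

```python
# Hypothetical sketch of attention-gated point/image feature fusion.
# Dimensions, module names, and the gating design are illustrative only;
# they do not reproduce the AEPF implementation.
import torch
import torch.nn as nn


class AttentionPointImageFusion(nn.Module):
    def __init__(self, lidar_dim=64, image_dim=64, fused_dim=64):
        super().__init__()
        # Project both modalities into a common feature space.
        self.lidar_proj = nn.Linear(lidar_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        # Lightweight attention: a per-point scalar gate over the image branch.
        self.gate = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim),
            nn.ReLU(inplace=True),
            nn.Linear(fused_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feats, image_feats):
        # lidar_feats: (N_points, lidar_dim) voxel/point features
        # image_feats: (N_points, image_dim) image features sampled at the
        #              projected 2D location of each point
        l = self.lidar_proj(lidar_feats)
        i = self.image_proj(image_feats)
        w = self.gate(torch.cat([l, i], dim=-1))  # (N_points, 1) in [0, 1]
        # The gate down-weights unreliable image evidence per point.
        return l + w * i


if __name__ == "__main__":
    fusion = AttentionPointImageFusion()
    fused = fusion(torch.randn(1024, 64), torch.randn(1024, 64))
    print(fused.shape)  # torch.Size([1024, 64])
```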

2.
Sensors (Basel); 24(15), 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39124065

ABSTRACT

External human-machine interfaces (eHMIs) serve as communication bridges between autonomous vehicles (AVs) and road users, ensuring that vehicles convey information clearly to those around them. While their potential has been explored in one-to-one contexts, the effectiveness of eHMIs in complex, real-world scenarios with multiple pedestrians remains relatively unexplored. Addressing this gap, our study provides an in-depth evaluation of how various eHMI displays affect pedestrian behavior. The research aimed to identify the eHMI configurations that most effectively convey an AV's information, thereby enhancing pedestrian safety. Using a mixed-methods approach, our study combined controlled outdoor experiments, involving 31 participants initially and 14 in a follow-up session, with an intercept survey of 171 additional individuals. Participants were exposed to various eHMI displays in crossing scenarios to measure their impact on pedestrian perception and crossing behavior. Our findings reveal that the combination of a flashing green LED, a robotic sign, and a countdown timer constitutes the most effective eHMI display. This configuration notably increased pedestrians' willingness to cross and decreased their response times, indicating a strong preference and enhanced understanding of the concept. These findings lay the groundwork for future developments in AV technology and traffic safety, potentially guiding policymakers and manufacturers in creating safer urban environments.

3.
Sensors (Basel); 24(7), 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38610538

ABSTRACT

Safe autonomous vehicle (AV) operations depend on an accurate perception of the driving environment, which necessitates the use of a variety of sensors. Computational algorithms must then process all of this sensor data, which typically results in a high on-vehicle computational load. For example, existing lane markings are designed for human drivers, can fade over time, and can be contradictory in construction zones, all of which require specialized sensing and computational processing in an AV. However, this standard process can be avoided if the lane information is simply transmitted directly to the AV. High-definition maps and roadside units (RSUs) can be used for direct data transmission to the AV, but they can be prohibitively expensive to establish and maintain. Additionally, to ensure robust and safe AV operations, more redundancy is beneficial. A cost-effective and passive solution is essential to address this need. In this research, we propose a new infrastructure information source (IIS), chip-enabled raised pavement markers (CERPMs), which provide environmental data to the AV while also decreasing the AV compute load and the associated increase in vehicle energy use. CERPMs are installed in place of the traditional, ubiquitous raised pavement markers along road lane lines and transmit geospatial information, along with the speed limit, directly to nearby vehicles using the long-range wide-area network (LoRaWAN) protocol. This information is then compared against the Mobileye commercial off-the-shelf system, which relies on computer vision processing of lane markings. Our perception subsystem processes the raw data from both CERPMs and Mobileye to generate a viable path required for a lane centering (LC) application. To evaluate the detection performance of both systems, we consider three test routes with varying conditions. Our results show that the Mobileye system failed to detect lane markings when the road curvature exceeded ±0.016 m⁻¹. For the steep-curvature test scenario, it could detect lane markings on both sides of the road for just 6.7% of the given test route. The CERPMs, on the other hand, transmit the programmed geospatial information to the perception subsystem on the vehicle to generate the reference trajectory required for vehicle control, and they successfully generated this reference trajectory in all test scenarios. Moreover, the CERPMs can be detected up to 340 m from the vehicle's position. Our overall conclusion is that CERPM technology is viable and has the potential to address the operational robustness and energy efficiency concerns plaguing the current generation of AVs.
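
For illustration, a minimal sketch of how an on-vehicle receiver might decode a CERPM broadcast into waypoints for lane centering is given below. The payload layout, scaling factors, and field set are assumptions, not the encoding used in the study.

```python
# Hypothetical CERPM payload decoder; the 9-byte layout, the 1e-7-degree
# scaling, and the field set are illustrative assumptions only.
import struct
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MarkerMessage:
    latitude_deg: float
    longitude_deg: float
    speed_limit_kph: int


def decode_cerpm_payload(payload: bytes) -> MarkerMessage:
    """Decode an assumed 9-byte payload: int32 lat*1e7, int32 lon*1e7, uint8 speed limit."""
    lat_e7, lon_e7, speed = struct.unpack(">iiB", payload)
    return MarkerMessage(lat_e7 / 1e7, lon_e7 / 1e7, speed)


def reference_waypoints(messages: List[MarkerMessage]) -> List[Tuple[float, float]]:
    """Collect decoded marker positions into a simple waypoint list for lane centering."""
    return [(m.latitude_deg, m.longitude_deg) for m in messages]


if __name__ == "__main__":
    # Pack a sample marker position and speed limit, then decode it.
    raw = struct.pack(">iiB", int(42.7325 * 1e7), int(-84.5555 * 1e7), 40)
    msg = decode_cerpm_payload(raw)
    print(msg)
    print(reference_waypoints([msg]))
```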

4.
Sensors (Basel); 22(16), 2022 Aug 11.
Article in English | MEDLINE | ID: mdl-36015761

ABSTRACT

Commercialization of autonomous vehicle technology is a major goal of the automotive industry, and research in this space is therefore rapidly expanding across the world. However, despite this high level of research activity, literature detailing a straightforward and cost-effective approach to the development of an AV research platform is sparse. To address this need, we present the methodology and results regarding the AV instrumentation and controls of a 2019 Kia Niro, which was developed for a local AV pilot program. This platform includes a drive-by-wire actuation kit, an Aptiv electronically scanning radar, a stereo camera, a Mobileye computer vision system, a LiDAR, an inertial measurement unit, two global positioning system receivers to provide heading information, and an in-vehicle computer for driving environment perception and path planning. Robot Operating System (ROS) software is used as the system middleware between the instruments and the autonomous application algorithms. After selection, installation, and integration of these components, our results show successful utilization of all sensors, drive-by-wire functionality, a total additional power consumption of 242.8 W (typical), and an overall cost of USD 118,189, a significant saving compared to other commercially available systems with similar functionality. This vehicle continues to serve as our primary AV research and development platform.
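
For illustration, a minimal sketch of a ROS 1 (rospy) node acting in the middleware role described above is given below. The topic names and message choices are assumptions, not the platform's actual configuration.

```python
# Minimal sketch of a rospy node that bridges (assumed) sensor topics to a
# perception stage; topic names and message types are illustrative only.
import rospy
from sensor_msgs.msg import NavSatFix, PointCloud2


class PerceptionBridge:
    def __init__(self):
        rospy.init_node("perception_bridge")
        # Subscribe to assumed LiDAR and GNSS topics published by the sensor drivers.
        rospy.Subscriber("/lidar/points", PointCloud2, self.on_points)
        rospy.Subscriber("/gps/fix", NavSatFix, self.on_fix)

    def on_points(self, msg):
        # Log the incoming point cloud dimensions at most once per second.
        rospy.loginfo_throttle(1.0, "point cloud: %d x %d", msg.height, msg.width)

    def on_fix(self, msg):
        # Log the latest GNSS position at most once per second.
        rospy.loginfo_throttle(1.0, "GNSS fix: %.6f, %.6f", msg.latitude, msg.longitude)


if __name__ == "__main__":
    PerceptionBridge()
    rospy.spin()  # hand control to ROS until shutdown
```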


Subjects
Automobile Driving, Autonomous Vehicles, Artificial Intelligence, Conservation of Energy Resources, Cost-Benefit Analysis