A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control.
Kim, Jihun; Park, Sanghoon; Kim, Jeesu; Yoo, Jinwoo.
Affiliation
  • Kim J; Graduate School of Automotive Engineering, Kookmin University, Seoul 02707, Republic of Korea.
  • Park S; Graduate School of Automotive Engineering, Kookmin University, Seoul 02707, Republic of Korea.
  • Kim J; Departments of Cogno-Mechatronics Engineering and Optics and Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea.
  • Yoo J; Department of Automobile and IT Convergence, Kookmin University, Seoul 02707, Republic of Korea.
Sensors (Basel) ; 23(24)2023 Dec 15.
Article in En | MEDLINE | ID: mdl-38139694
ABSTRACT
As autonomous vehicles (AVs) advance to higher levels of autonomy and performance, the associated technologies are becoming increasingly diverse. Lane-keeping systems (LKS), a key AV functionality, considerably enhance driver convenience. As drivers come to rely more heavily on autonomous driving technologies, safety features such as fail-safe mechanisms for sensor failures have gained prominence. Therefore, this paper proposes a reinforcement learning (RL) control method for lane-keeping that uses surrounding object information derived from LiDAR sensors instead of camera sensors. The approach feeds surrounding vehicle and object information to the RL framework as observations, which the agent uses to keep the vehicle in its current lane. The learning environment is built by integrating simulation tools: IPG CarMaker, which provides the vehicle dynamics, and MATLAB Simulink for data analysis and RL model creation. To further validate the applicability of the LiDAR sensor data in real-world settings, Gaussian noise is added in the virtual simulation environment to mimic sensor noise under actual operating conditions.
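The noise-injection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the noise standard deviation, and the use of relative distances as the observation vector are all assumptions, since the abstract does not specify these details.

```python
import numpy as np

def add_lidar_noise(observation, sigma=0.2, rng=None):
    """Perturb LiDAR-derived observations with zero-mean Gaussian noise.

    observation : array of relative distances (m) to surrounding objects,
                  a hypothetical stand-in for the paper's observation vector.
    sigma       : noise standard deviation (illustrative value only).
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = observation + rng.normal(0.0, sigma, size=observation.shape)
    # Measured distances cannot be negative after perturbation.
    return np.maximum(noisy, 0.0)

# Example: clean relative distances to four surrounding vehicles.
clean = np.array([12.5, 30.0, 8.2, 45.1])
noisy = add_lidar_noise(clean, sigma=0.2, rng=np.random.default_rng(0))
```

In an RL training loop, such a function would wrap the simulator's ground-truth object list so the agent learns a policy that is robust to sensor noise.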
Full text: 1 Database: MEDLINE Language: En Publication year: 2023 Document type: Article