Gas concentration mapping and source localization for environmental monitoring through unmanned aerial systems using model-free reinforcement learning agents.
Husnain, Anees Ul; Mokhtar, Norrima; Mohamed Shah, Noraisyah Binti; Dahari, Mahidzal Bin; Azmi, Amirul Asyhraff; Iwahashi, Masahiro.
Affiliation
  • Husnain AU; Department of Electrical Engineering, University of Malaya, Kuala Lumpur, Malaysia.
  • Mokhtar N; Department of Computer Systems Engineering, Faculty of Engineering, The Islamia University of Bahawalpur, Bahawalpur, Pakistan.
  • Mohamed Shah NB; Department of Electrical Engineering, University of Malaya, Kuala Lumpur, Malaysia.
  • Dahari MB; Department of Electrical Engineering, University of Malaya, Kuala Lumpur, Malaysia.
  • Azmi AA; Department of Electrical Engineering, University of Malaya, Kuala Lumpur, Malaysia.
  • Iwahashi M; Department of Electrical Engineering, University of Malaya, Kuala Lumpur, Malaysia.
PLoS One ; 19(2): e0296969, 2024.
Article in English | MEDLINE | ID: mdl-38394180
ABSTRACT
This work has three primary objectives: first, to establish a gas concentration map; second, to estimate the point of emission of the gas; and third, to generate a path from any location to the point of emission for UAVs or UGVs. A mountable array of MOX sensors was developed so that the angles and distances among the sensors, alongside the sensor data, could be used to identify the influx of gas plumes. Gas dispersion experiments were conducted under indoor conditions to collect data at numerous locations and angles for training machine learning algorithms, with Taguchi's orthogonal arrays used to design the experiments and select the gas dispersion locations. For the second objective, the pre-processed data were used to train an off-policy, model-free reinforcement learning agent with a Q-learning policy. After training on the training dataset, Q-learning produces a table called the Q-table, which contains the state-action pairs used to generate an autonomous path from any point in the testing dataset to the source. The entire process is carried out in an obstacle-free environment, and the whole scheme is designed to operate in three modes: search, track, and localize. The hyperparameter combinations of the RL agent were evaluated through a trial-and-error approach; the combination ε = 0.9, γ = 0.9, and α = 0.9 generated paths fastest, taking 1258.88 seconds for training and 6.2 milliseconds for path generation. Out of 31 unseen scenarios, the trained RL agent generated paths for all 31; however, the UAV successfully reached the gas source in only 23, a success rate of 74.19%. These results pave the way for using reinforcement learning techniques for autonomous path generation in unmanned systems, while exploring and improving the accuracy of the reported results remains future work.
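The tabular Q-learning scheme the abstract describes can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the toy 5×5 grid, the reward shaping, the episode counts, and all function names are assumptions, while the hyperparameter values ε = 0.9, γ = 0.9, α = 0.9 match the combination the paper reports as fastest. The greedy rollout at the end corresponds loosely to the "localize" mode, reading a path to the source out of the learned Q-table.

```python
import random

random.seed(0)

SIZE = 5                      # illustrative 5x5 grid; gas source at (0, 0)
SOURCE = (0, 0)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
EPSILON, GAMMA, ALPHA = 0.9, 0.9, 0.9          # values reported in the paper

# Q-table: (state, action) pairs mapped to expected return
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(4)}

def step(state, action):
    """Apply an action; reward +10 at the source, -1 per move (assumed shaping)."""
    dr, dc = ACTIONS[action]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    return nxt, (10.0 if nxt == SOURCE else -1.0), nxt == SOURCE

def train(episodes=2000):
    for _ in range(episodes):
        state = (SIZE - 1, SIZE - 1)           # start far from the source
        for _ in range(100):
            if random.random() < EPSILON:      # explore (epsilon-greedy)
                action = random.randrange(4)
            else:                              # exploit the current Q-table
                action = max(range(4), key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # Off-policy update: the target uses the greedy next action,
            # regardless of which action the behavior policy actually takes
            best_next = max(Q[(nxt, a)] for a in range(4))
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - Q[(state, action)])
            state = nxt
            if done:
                break

def path_from(state):
    """Greedy rollout over the learned Q-table (roughly the 'localize' mode)."""
    path = [state]
    while state != SOURCE and len(path) < 50:
        action = max(range(4), key=lambda a: Q[(state, a)])
        state, _, _ = step(state, action)
        path.append(state)
    return path

train()
print(path_from((4, 4)))      # greedy path from the far corner to the source
```

Because the update target is always the greedy next action, the agent can learn the optimal Q-values even while behaving mostly randomly (ε = 0.9 means 90% exploratory moves), which is what makes Q-learning off-policy.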
Full text: 1 Collection: 01-internacional Database: MEDLINE Health context: 2_ODS3 Health problem: 2_quimicos_contaminacion Main subject: Algorithms / Environmental Monitoring Language: English Journal: PLoS One Journal subject: SCIENCE / MEDICINE Year: 2024 Document type: Article Country of affiliation: Malaysia