Urban Air-Quality Estimation Using Visual Cues and a Deep Convolutional Neural Network in Bengaluru (Bangalore), India.
Feldman, Alon; Kendler, Shai; Marshall, Julian; Kushwaha, Meenakshi; Sreekanth, V; Upadhya, Adithi R; Agrawal, Pratyush; Fishbain, Barak.
Affiliation
  • Feldman A; Department of Mathematics, Technion-Israel Institute of Technology, Haifa 3200003, Israel.
  • Kendler S; Environmental Physics Department, Israel Institute for Biological Research, Ness Ziona 7410001, Israel.
  • Marshall J; Department of Civil and Environmental Engineering, University of Washington, Seattle, Washington 98195, United States.
  • Kushwaha M; ILK Laboratories, Bengaluru 560046, India.
  • Sreekanth V; Center for Study of Science, Technology & Policy, Bengaluru 560094, India.
  • Upadhya AR; ILK Laboratories, Bengaluru 560046, India.
  • Agrawal P; Department of Public Health, Policy & Systems, University of Liverpool, Liverpool L69 3GF, England.
  • Fishbain B; Department of Environmental, Water and Agricultural Engineering, Faculty of Civil & Environmental Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel.
Environ Sci Technol ; 58(1): 480-487, 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38104325
ABSTRACT
Mobile monitoring provides robust measurements of air pollution. However, resource constraints often limit the number of measurements so that assessments cannot be obtained in all locations of interest. In response, surrogate measurement methodologies, such as videos and images, have been suggested. Previous studies of air pollution and images have used static images (e.g., satellite images or Google Street View images). The current study was designed to develop deep learning methodologies to infer on-road pollutant concentrations from videos acquired with dashboard cameras. Fifty hours of on-road measurements of four pollutants (black carbon, particle number concentration, PM2.5 mass concentration, carbon dioxide) in Bengaluru, India, were analyzed. The analysis of each video frame involved identifying objects and determining motion (by segmentation and optical flow). Based on these visual cues, a regression convolutional neural network (CNN) was used to deduce pollution concentrations. The findings showed that the CNN approach outperformed several other machine learning (ML) techniques and more conventional analyses (e.g., linear regression). The CO2 prediction model achieved a normalized root-mean-square error of 10-13.7% for the different train-validation division methods. The results here thus contribute to the literature by using video and the relative motion of on-screen objects rather than static images and by implementing a rapid-analysis approach enabling analysis of the video in real time. These methods can be applied to other mobile-monitoring campaigns since the only additional equipment they require is an inexpensive dashboard camera.
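The pipeline the abstract describes — extract per-frame visual cues (object and motion information), then regress pollutant concentrations with a CNN — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it approximates the motion cue with simple frame differencing (standing in for optical flow and segmentation), and the single untrained convolution, ReLU, global pooling, and linear readout merely show the data flow of a regression CNN. All array shapes, parameter values, and function names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_cue(prev_frame, frame):
    """Crude stand-in for dense optical flow: absolute frame difference."""
    return np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))

def conv2d(x, kernel):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def regress_concentration(frame, prev_frame, kernel, weight, bias):
    """One CNN-style regression step: motion cue -> conv -> ReLU -> pool -> linear."""
    cue = motion_cue(prev_frame, frame)           # motion channel from two frames
    feat = np.maximum(conv2d(cue, kernel), 0.0)   # convolution + ReLU activation
    pooled = feat.mean()                          # global average pooling to a scalar
    return float(weight * pooled + bias)          # linear readout: one concentration value

# Toy grayscale "dashboard camera" frames (32x32 pixels)
prev_frame = rng.integers(0, 256, (32, 32))
frame = rng.integers(0, 256, (32, 32))

kernel = rng.standard_normal((3, 3)).astype(np.float32)
pred = regress_concentration(frame, prev_frame, kernel, weight=0.01, bias=400.0)
print(f"predicted concentration (arbitrary units): {pred:.1f}")
```

In the study itself, the visual cues come from segmentation and optical flow rather than frame differencing, and the CNN is trained against the mobile-monitoring measurements; the sketch only clarifies why a video stream (two frames at a time) carries information that a single static image does not.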
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Air Pollutants / Air Pollution / Environmental Pollutants Language: English Year of publication: 2024 Document type: Article