ABSTRACT
Mobile monitoring provides robust measurements of air pollution. However, resource constraints often limit the number of measurements, so that assessments cannot be obtained at all locations of interest. In response, surrogate measurement methodologies, such as videos and images, have been suggested. Previous studies of air pollution and images have used static images (e.g., satellite images or Google Street View images). The current study was designed to develop deep learning methodologies to infer on-road pollutant concentrations from videos acquired with dashboard cameras. Fifty hours of on-road measurements of four pollutants (black carbon, particle number concentration, PM2.5 mass concentration, carbon dioxide) in Bengaluru, India, were analyzed. The analysis of each video frame involved identifying objects and determining motion (by segmentation and optical flow). Based on these visual cues, a regression convolutional neural network (CNN) was used to deduce pollution concentrations. The findings showed that the CNN approach outperformed several other machine learning (ML) techniques and more conventional analyses (e.g., linear regression). The CO2 prediction model achieved a normalized root-mean-square error of 10-13.7% across the different train-validation split methods. The results thus contribute to the literature by using video and the relative motion of on-screen objects rather than static images, and by implementing a rapid-analysis approach that enables analysis of the video in real time. These methods can be applied to other mobile-monitoring campaigns, since the only additional equipment they require is an inexpensive dashboard camera.
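The normalized root-mean-square error reported above can be computed as in the following minimal sketch, which assumes normalization by the range of the observed values (conventions differ; normalization by the mean is also common, and the abstract does not state which was used):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the observed range
    (one common convention; the study's exact convention is assumed)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())
```

For example, predictions that are consistently off by 10 ppm over an observed CO2 range of 100 ppm yield an NRMSE of 10%.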
Subjects
Air Pollutants, Air Pollution, Environmental Pollutants, Air Pollutants/analysis, Particulate Matter/analysis, Environmental Monitoring/methods, Cues (Psychology), India, Air Pollution/analysis, Neural Networks (Computer), Environmental Pollutants/analysis
ABSTRACT
Data analysis has increasingly relied on machine learning in recent years. Since machines implement mathematical algorithms without knowing the physical nature of the problem, they may be accurate but lack the flexibility to move across different domains. This manuscript presents a machine-educating approach in which a machine is equipped with a physical model, universal building blocks, and an unlabeled dataset from which it derives its decision criteria. Here, the concept of machine education is deployed to identify thin layers of organic materials using hyperspectral imaging (HSI). The measured spectra formed a nonlinear mixture of the unknown background materials and the target material spectra. The machine was educated to resolve this nonlinear mixing and identify the spectral signature of the target materials. The inputs for educating and testing the machine were a nonlinear mixing model, the spectra of the pure target materials (which are problem invariant), and the unlabeled HSI data. The educated machine is accurate, and its generalization capabilities outperform classical machines. When using the educated machine, the number of falsely identified samples is ~100 times lower than with the classical machine. The probability of detection with the educated machine is 96%, compared to 90% with the classical machine.
Subjects
Hyperspectral Imaging, Machine Learning, Algorithms, Support Vector Machine
ABSTRACT
Catastrophic gas leak events require human First Responder Teams (FRTs) to map hazardous areas (red zones). The initial task of FRTs in such events is to assess the risk according to the pollution level and to quickly evacuate civilians to prevent casualties. These teams risk their lives by manually mapping the gas dispersion. This process is currently performed using hand-held gas detectors and requires dense and exhaustive monitoring to achieve reliable maps. However, the conventional mapping process is impaired by limited human mobility and monitoring capacities. In this context, this paper presents a method for gas sensing using unmanned aerial vehicles. The research focuses on developing a custom path planner, Boundary Red Emission Zone Estimation (BREEZE). BREEZE is an estimation approach that allows efficient red zone delineation by following its boundary. The presented approach improves the gas dispersion mapping process by performing adaptive path planning, monitoring gas dispersion in real time, and analyzing the measurements online. This approach was examined by simulating a cluttered urban site under different environmental conditions. The simulation results show the ability to autonomously perform red zone estimation faster than methods that rely on predetermined paths, and with a precision higher than 90%.
Subjects
Environmental Monitoring, Unmanned Aerial Devices, Environmental Monitoring/methods, Environmental Pollution, Humans
ABSTRACT
Air pollution is one of the prime adverse environmental outcomes of urbanization and industrialization. The first step toward air pollution mitigation is monitoring and identifying its source(s). The deployment of a sensor array always involves a tradeoff between cost and performance, and the performance of the network depends heavily on the optimal deployment of the sensors. The latter is known as the location-allocation problem. Here, a new approach drawing on information theory is presented, in which air pollution levels at different locations are computed using a Lagrangian atmospheric dispersion model under various meteorological conditions. The sensors are then placed in those locations identified as the most informative. Specifically, entropy is used to quantify the locations' informativeness. This entropy method is compared to two commonly used heuristics for solving the location-allocation problem: in the first, sensors are randomly deployed; in the second, the sensors are placed according to maximal cumulative pollution levels (i.e., hot spots). Two simulated scenarios were evaluated: one containing point sources and buildings and the other containing line sources (i.e., roads). The entropy method resulted in superior sensor deployment in terms of source apportionment and reconstruction of the dense pollution field from the sparse sensor network's measurements.
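The entropy criterion for candidate locations can be sketched as follows. This is an illustrative simplification: concentrations simulated under many meteorological scenarios are binned into a histogram at each candidate location, and the Shannon entropy of that histogram scores the location's informativeness. The bin count and the greedy top-k selection are assumptions, not details taken from the study:

```python
import numpy as np

def location_entropy(concentrations, bins=8):
    """Shannon entropy (bits) of the binned concentration distribution
    at one candidate location, over many simulated scenarios."""
    counts, _ = np.histogram(concentrations, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def most_informative(sim_fields, k, bins=8):
    """Rank candidate locations (columns of `sim_fields`, one row per
    meteorological scenario) by entropy and return the top k indices."""
    scores = [location_entropy(sim_fields[:, j], bins) for j in range(sim_fields.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

A location whose simulated concentration never changes carries zero entropy and is uninformative; a location whose concentration varies widely across scenarios scores highly.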
Subjects
Air Pollution, Information Theory, Environmental Monitoring/methods
ABSTRACT
Industrial activities involve the manipulation of harmful chemicals. As there is no way to guarantee fail-safe operation, the means and response methods must be planned in advance to cope with a chemical disaster. In these situations, first responders assess the situation from the atmospheric conditions, but they have scant data on the source of the contamination, which curtails their response toolbox. Hence, a sensor deployment strategy needs to be formulated in real time based on the meteorological conditions, sensor attributes, and resources. This work examined the tradeoff between sensor locations and their attributes. The findings show that if the sensor locations are optimal, sensor quantity matters more than sensor quality, in that the sensors' dynamic range is a significant factor when quantifying leaks but is less important if the goal is solely to locate the leak source(s). This methodology can be used for sensor location-allocation under real-life conditions and technological constraints.
Subjects
Wireless Technology
ABSTRACT
The psychiatric diagnostic procedure is currently based on self-reports that are subject to personal biases. Therefore, the diagnostic process would benefit greatly from data-driven tools that can enhance accuracy and specificity. In recent years, many studies have achieved promising results in detecting and diagnosing depression based on machine learning (ML) analysis. Despite these favorable results in depression diagnosis, which are primarily based on ML analysis of neuroimaging data, most patients do not have access to neuroimaging tools. Hence, objective assessment tools are needed that can be easily integrated into the routine psychiatric diagnostic process. One solution is to use behavioral data, which can be easily collected while still maintaining objectivity. The current paper summarizes the main ML-based approaches that use behavioral data in diagnosing depression and other psychiatric disorders. We classified these studies into two main categories: (a) laboratory-based assessments and (b) data mining, the latter of which we further divided into two sub-groups: (i) social media usage and movement sensor data and (ii) demographic and clinical information. The paper discusses the advantages and challenges in this field and suggests future research directions and implementations. The paper's overarching aim is to serve as a first step in synthesizing existing knowledge about ML-based behavioral diagnosis studies in order to develop interventions and individually tailored treatments in the future.
ABSTRACT
In light of the need for objective mechanism-based diagnostic tools, the current research describes a novel diagnostic support system aimed at differentiating between anxiety and depression disorders in a clinical sample. Eighty-six psychiatric patients with clinical anxiety and/or depression were recruited from a public hospital and assigned to one of the experimental groups: Depression, Anxiety, or Mixed. The control group included 25 participants with no psychiatric diagnosis. Participants performed a battery of six cognitive-behavioral tasks assessing biases of attention, expectancies, memory, interpretation, and executive functions. Data were analyzed with a machine-learning (ML) random forest-based algorithm and cross-validation techniques. The model assigned participants to clinical groups based solely on their aggregated cognitive performance. By detecting each group's unique performance pattern and the specific measures contributing to the prediction, the ML algorithm predicted diagnosis classification in two models: (I) anxiety/depression/mixed vs. control (76.81% specificity, 69.66% sensitivity), and (II) anxiety group vs. depression group (80.50% and 66.46% success rates in classifying anxiety and depression, respectively). The findings demonstrate that the cognitive battery can be utilized as a support system for psychiatric diagnosis alongside the clinical interview. This implicit tool, which is not based on self-report, is expected to enable the clinician to achieve increased diagnostic specificity and precision. Further, this tool may increase the confidence of both clinician and patient in the diagnosis by equipping them with an objective assessment tool. Finally, the battery provides a profile of biased cognitions that characterizes the patient, which in turn enables more fine-tuned, individually tailored therapy.
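The reported specificity and sensitivity figures are derived from the confusion matrix of cross-validated predictions. The random forest pipeline itself is not reproduced here; this minimal sketch only shows how the two metrics are computed from predicted and true group labels (1 = clinical, 0 = control):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels: 1 = clinical group, 0 = control group."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # clinical, correctly flagged
    fn = np.sum((y_true == 1) & (y_pred == 0))  # clinical, missed
    tn = np.sum((y_true == 0) & (y_pred == 0))  # control, correctly cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))  # control, falsely flagged
    return tp / (tp + fn), tn / (tn + fp)
```

In a cross-validation setting, `y_pred` would hold each participant's out-of-fold prediction, so the two rates estimate out-of-sample performance.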
Subjects
Anxiety, Depression, Anxiety/diagnosis, Anxiety Disorders, Depression/diagnosis, Humans, Machine Learning, Self Report
ABSTRACT
Anxiety and depression are distinct, albeit overlapping, psychiatric disorders, currently diagnosed via self-reported symptoms. This research presents a new diagnostic methodology, which tests rigorously for differences in cognitive biases among subclinically anxious and depressed individuals. A total of 125 participants were divided into four groups based on the levels of their anxiety and depression symptoms. A comprehensive behavioral test battery detected and quantified various cognitive-emotional biases. Advanced machine-learning tools, developed for this study, analyzed these results. These tools detect unique patterns that characterize anxiety versus depression to predict group membership. The prediction model for differentiating between symptomatic participants (i.e., high symptoms of depression, anxiety, or both) and the non-symptomatic control group achieved a 71.44% prediction accuracy for the former (sensitivity) and 70.78% for the latter (specificity). A two-group model comparing high depression with high anxiety yielded 68.07% and 74.18% prediction accuracy, respectively. The analysis also disclosed which specific behavioral measures contributed to the prediction, pointing to key cognitive mechanisms in anxiety versus depression. These results lay the ground for improved diagnostic instruments and more effective, focused, individually based treatment.
Subjects
Anxiety Disorders/psychology, Anxiety/psychology, Depression/psychology, Adult, Cognition/physiology, Emotions/physiology, Female, Humans, Machine Learning, Male, Self Report
ABSTRACT
Efficient and safe detection of Bacillus anthracis spores (BAS) is a challenging task, especially in bio-terror scenarios where the agent is concealed. We provide a proof-of-concept for the identification of concealed BAS inside mail envelopes using short-wave infrared hyperspectral imaging (SWIR-HSI). The spores and two other benign materials are identified according to their typical absorption spectra. The identification process is based on the removal of the envelope signal using a new automatic algorithm. This method may serve as a fast screening tool prior to using classical bioanalytical techniques.
Subjects
Bacillus anthracis/isolation & purification, Infrared Rays, Spectrum Analysis/methods, Bacterial Spores/isolation & purification, Algorithms, Bioterrorism, Forensic Sciences/methods, Humans, Postal Services
ABSTRACT
Water is a resource that affects every aspect of life. Intentional (terrorist or wartime events) or accidental water contamination events could have a tremendous impact on public health, behavior, and morale. Quick detection of such events can mitigate their effects and reduce the potential damage. Continuous on-line monitoring is the first line of defense for reducing contamination-associated damage. One of the available tools for such detection is UV-absorbance spectrophotometry, where the absorbance spectra are compared against a set of normal and contaminated water fingerprints. However, as there are many factors at play that affect this comparison, it is an elusive and tedious task. Further, the comparison against a set of known fingerprints is futile when the water in the supply system is a mix, in varying proportions, of water from different sources that differ significantly in their physicochemical characteristics. This study presents a new scheme for early detection of contamination events through UV absorbance under unknown routine conditions. The detection mechanism is based on a new affinity measure, Fitness, and a procedure similar to Gram-based amplification, which together form a flexible mechanism to alert if a contamination is present. The method was shown to be most effective when applied to a set of comprehensive experiments, which examined the absorbance of various contaminants in drinking water in laboratory and real-life configurations. Four datasets containing real readings, from either laboratory experiments or a monitoring station of an operational water supply system, were used. To extend the testbed even further, an artificial dataset simulating a vast array of proportions between specific water sources is also presented. The results show that, for all datasets, the algorithm achieved high detection rates while maintaining low false alarm rates.
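The Fitness affinity measure itself is not specified in this abstract. As a hedged illustration of the general fingerprint-comparison step only, the following sketch flags a spectrum whose best similarity to any known normal-water fingerprint falls below a threshold, using cosine similarity as a stand-in affinity (the threshold value is likewise an assumption):

```python
import numpy as np

def is_contaminated(spectrum, normal_fingerprints, threshold=0.99):
    """Flag a UV absorbance spectrum whose best cosine similarity to any
    known normal-water fingerprint falls below `threshold`.
    Cosine similarity is an illustrative stand-in for the paper's
    Fitness affinity measure."""
    s = np.asarray(spectrum, dtype=float)
    s = s / np.linalg.norm(s)
    best = max(
        float(np.dot(s, f / np.linalg.norm(f)))
        for f in np.asarray(normal_fingerprints, dtype=float)
    )
    return best < threshold
```

A spectrum that is merely a rescaled normal fingerprint (e.g., diluted water) is not flagged, while a spectrum with a different shape is.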
Subjects
Water Supply, Water, Machine Learning, Spectrophotometry, Water Pollution
ABSTRACT
Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when deployed in the field, due to interference from other pollutants, sensitivity to environmental conditions, and sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodic collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods, make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially, one against the other, while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally, and experimentally. The in-situ N2N calibration is shown to be generic and applicable to different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor orders within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagates calibration errors similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks.
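The N2N chain can be sketched as a sequence of gain/offset fits, each node calibrated against its already-corrected upstream neighbor. The linear (gain/offset) calibration form and the data layout below are illustrative assumptions; the study's actual calibration model is not specified in the abstract:

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares gain and offset such that gain*x + offset approximates y."""
    gain, offset = np.polyfit(x, y, 1)
    return gain, offset

def n2n_chain(collocations):
    """Node-to-node chain calibration sketch.

    `collocations[i]` = (upstream, node_i) paired readings from a
    collocation period: the upstream of node 0 is the reference
    instrument; the upstream of node i (i > 0) is node i-1, whose raw
    readings are first corrected with its own fitted parameters.
    Returns a list of (gain, offset) corrections, one per node.
    """
    params = []
    for i, (upstream, node) in enumerate(collocations):
        upstream = np.asarray(upstream, dtype=float)
        if i > 0:
            g_prev, o_prev = params[i - 1]
            upstream = g_prev * upstream + o_prev  # propagate the chain correction
        params.append(fit_linear(np.asarray(node, dtype=float), upstream))
    return params
```

Because each fit is performed against corrected rather than raw upstream readings, fitting noise accumulates along the chain, which is why the error-propagation analysis described above matters.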
Subjects
Air Pollutants/analysis, Environmental Monitoring/instrumentation, Air Pollution/analysis, Calibration, Environmental Monitoring/methods, Wireless Technology
ABSTRACT
Maintaining water quality is critical for any water distribution company. One of the major concerns in water quality assurance is bacterial contamination in water sources. To date, bacteria growth models cannot predict with sufficient accuracy when a bacteria outburst will occur in a water well. This is partly due to the natural sparsity of the bacteria count time series, which hinders the observation of deviations from normal behavior. This precludes the application of mathematical models or statistical quality control methods for detecting high bacteria counts before contamination occurs. As a result, predicting a future outbreak is currently a subjective process. This research developed a new cost-effective method that capitalizes on the sparsity of the bacteria count time series. The presented method first transforms the data into its spectral representation, where it is no longer sparse. In this spectral representation, the dimensionality of the problem is then reduced. Machine learning methods are then applied to the reduced representations to predict bacteria outbursts from the bacterial count history of a well. The results show that these tools can be implemented by the water quality engineering community to create objective, more robust quality control techniques to ensure safer water distribution.
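The transform-and-reduce step can be illustrated with a discrete Fourier transform: a sparse count series becomes dense in the frequency domain, and keeping only a few low-frequency magnitudes reduces the dimensionality. Both the choice of transform and the feature count are assumptions here, since the abstract does not name them:

```python
import numpy as np

def spectral_features(counts, n_features=8):
    """Turn a sparse bacteria-count series into a dense low-dimensional
    feature vector: magnitudes of the lowest-frequency rFFT coefficients."""
    spectrum = np.fft.rfft(np.asarray(counts, dtype=float))
    mags = np.abs(spectrum)[:n_features]
    # Pad if the series is shorter than the requested feature count.
    if mags.size < n_features:
        mags = np.pad(mags, (0, n_features - mags.size))
    return mags
```

A downstream classifier or regressor would then be trained on these fixed-length feature vectors, one per well and time window.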
Subjects
Theoretical Models, Public Health Surveillance, Water Microbiology, Water Supply, Water Wells, Algorithms, Environmental Monitoring
ABSTRACT
The emergence of low-cost, user-friendly, and very compact air pollution platforms enables observations at high spatial resolution in near real time and provides new opportunities to simultaneously enhance existing monitoring systems and engage citizens in active environmental monitoring. This provides a whole new set of capabilities in the assessment of human exposure to air pollution. However, the data generated by these platforms are often of questionable quality. We have conducted an exhaustive evaluation of 24 identical units of a commercial low-cost sensor platform against CEN (European Standardization Organization) reference analyzers, evaluating their measurement capability over time and a range of environmental conditions. Our results show that performance varies spatially and temporally, as it depends on the atmospheric composition and the meteorological conditions, and also from unit to unit, which makes it necessary to examine the data quality of each node before use. In general, guidance is lacking on how to test such sensor nodes and ensure adequate performance prior to marketing these platforms. We have implemented and tested diverse metrics in order to assess whether the sensors can be employed for applications that require high accuracy (e.g., to meet the Data Quality Objectives defined in air quality legislation, epidemiological studies) or lower accuracy (e.g., to represent the pollution level on a coarse scale, for purposes such as awareness raising). Data quality is a pertinent concern, especially in citizen science applications, where citizens are collecting and interpreting the data. In general, while low-cost platforms present low accuracy for regulatory or health purposes, they can provide relative and aggregated information about the observed air quality.
Subjects
Air Pollutants/analysis, Air Pollution/analysis, Environmental Exposure, Environmental Monitoring/methods, Calibration, Environmental Monitoring/economics, Environmental Monitoring/instrumentation, Humans, Norway, Time Factors, Weather
ABSTRACT
Recent developments in sensory and communication technologies have made the development of portable air-quality (AQ) micro-sensing units (MSUs) feasible. These MSUs allow AQ measurements in many new applications, such as ambulatory exposure analyses and citizen science. Typically, the performance of these devices is assessed using the mean error or correlation coefficients with respect to laboratory equipment. However, these criteria do not represent how such sensors perform outside of laboratory conditions in large-scale field applications, and they do not cover all aspects of possible differences in performance between sensor-based and standardized equipment, or changes in performance over time. This paper presents a comprehensive Sensor Evaluation Toolbox (SET) for evaluating AQ MSUs by a range of criteria, to better assess their performance in varied applications and environments. The SET includes four new schemes for evaluating sensors' capability to locate pollution sources, represent the pollution level on a coarse scale, and capture the high temporal variability of the observed pollutant, as well as their reliability. Each of the evaluation criteria allows for assessing sensors' performance in a different way, together constituting a holistic evaluation of the suitability and usability of the sensors in a wide range of applications. Application of the SET to measurements acquired by 25 MSUs deployed in eight cities across Europe showed that the suggested schemes facilitate a comprehensive cross-platform analysis that can be used to determine and compare the sensors' performance. The SET was implemented in R and the code is available on the first author's website.
ABSTRACT
Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESNs) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in a new way. However, the WDESNs' calibration process is a major obstacle to their becoming common practice. Here, we present a new, robust, and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, effectively rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, offering a valuable tool for aggregating distributed environmental monitoring data. Applying the method to an extensive real-life air-pollution dataset produced markedly more accurate results than both common practice and the state of the art.
ABSTRACT
Air pollution has a proven impact on public health. Currently, pollutant levels are obtained by high-priced, sizeable, stationary Air Quality Monitoring (AQM) stations. Recent developments in sensory and communication technologies have made relatively low-cost micro-sensing units (MSUs) feasible. Their lower power consumption and small size enable mobile sensing, deploying single or multiple units simultaneously. Recent studies have reported on measurements acquired by mobile MSUs mounted on cars, bicycles, and pedestrians. While these modes of transportation inherently present different velocity and acceleration regimes, the effect of the sensors' varying movement characteristics has not been previously accounted for. This research assesses the impact of a sensor's motion on its functionality through laboratory measurements and a field campaign. The laboratory setup consists of a wind tunnel to assess the effect of air flow on the measurements of nitrogen dioxide and ozone at different velocities in a controlled environment, while the field campaign is based on three cars mounted with MSUs, measuring pollutants and environmental variables at different traveling speeds. In both experimental designs we can regard the MSUs as moving objects in the environment, i.e., having a distinct ego-motion. The results show that the MSUs' behavior is highly affected by variation in speed and by sensor placement with respect to the direction of movement, mainly due to the physical properties of the installed sensors. This strongly suggests that any future design of MSUs must account for the speed effect from the design stage all the way through deployment and results analysis. This is the first report examining the influence of airflow variations on MSUs' ability to accurately measure pollutant levels.
Subjects
Air Pollutants/analysis, Environmental Monitoring/methods, Air Pollution, Nitrogen Dioxide/analysis, Ozone/analysis, Research Design
ABSTRACT
Accurate evaluation of the effects of air pollution on human well-being requires high-resolution measurements. Standard air quality monitoring stations provide accurate pollution levels, but due to their sparse distribution they cannot capture the highly resolved spatial variations within cities. Similarly, dedicated field campaigns can use tens of measurement devices and obtain highly dense spatial coverage, but deployment has normally been limited to short periods of no more than a few weeks. Nowadays, advances in communication and sensory technologies enable the deployment of dense grids of wireless distributed air monitoring nodes, yet the sensors' ability to capture the spatiotemporal pollutant variability at the sub-neighborhood scale has never been thoroughly tested. This study reports ambient measurements of gaseous air pollutants by a network of six wireless multi-sensor miniature nodes deployed in three urban sites, about 150 m apart. We demonstrate the network's capability to capture spatiotemporal concentration variations at an exceptionally fine resolution, but highlight the need for frequent in-situ calibration to maintain the consistency of some sensors. Accordingly, a field calibration procedure is proposed and shown to improve the system's performance. Overall, our results support the suitability of wireless distributed sensor networks for measuring urban air pollution at a sub-neighborhood spatial resolution, which suits the requirement for highly spatiotemporally resolved measurements at breathing height when assessing exposure to urban air pollution.
Subjects
Air Pollutants/analysis, Environmental Monitoring/instrumentation, Wireless Technology, Feasibility Studies
ABSTRACT
Of the numerous factors that play a role in fatal pedestrian collisions, the time of day, day of the week, and time of year can be significant determinants. More than 60% of all pedestrian collisions in 2007 occurred at night, despite the presumed decrease in both pedestrian and automobile exposure during the night. Although this trend is partially explained by factors such as fatigue and alcohol consumption, prior analysis of the Fatality Analysis Reporting System database suggests that pedestrian fatalities increase as light decreases, after controlling for other factors. This study applies graphical cross-tabulation, a novel visual assessment approach, to explore the relationships among collision variables. The results reveal that twilight and the first hour of darkness typically exhibit the greatest frequency of fatal pedestrian collisions. These hours are not necessarily the most risky on a per-mile-travelled basis, however, because pedestrian volumes are often still high. Additional analysis is needed to quantify the extent to which pedestrian exposure (walking/crossing activity) in these time periods plays a role in pedestrian crash involvement. Weekly patterns of fatal pedestrian collisions vary by time of year due to the seasonal changes in sunset time. In December, collisions are concentrated around twilight and the first hour of darkness throughout the week, while in June, collisions are most heavily concentrated around twilight and the first hours of darkness on Friday and Saturday. Friday and Saturday nights in June may be the most dangerous times for pedestrians. Knowing when pedestrian risk is highest is critically important for formulating effective mitigation strategies and for efficiently investing safety funds. This applied visual approach is a helpful tool for researchers intending to communicate with policy-makers and to identify relationships that can then be tested with more sophisticated statistical tools.
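The tabulation underlying a graphical cross-tabulation of collisions by hour of day and day of week can be sketched as follows; the visual encoding layered on top of this table is not reproduced here:

```python
import numpy as np

def crosstab_hour_weekday(hours, weekdays):
    """Count collisions in each (hour of day, day of week) cell.
    `hours` in 0-23, `weekdays` in 0-6; this count table is the raw
    input behind a graphical cross-tabulation."""
    table = np.zeros((24, 7), dtype=int)
    for h, d in zip(hours, weekdays):
        table[h, d] += 1
    return table
```

Normalizing each cell by an exposure estimate (pedestrian volume in that hour/day) would convert the frequency table into the per-exposure risk the abstract calls for.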
Subjects
Traffic Accidents/mortality, Circadian Rhythm, Darkness, Walking/injuries, Wounds and Injuries/mortality, California, Cross-Sectional Studies, Dangerous Behavior, Humans, Risk Factors, Seasons, Suburban Population/statistics & numerical data, Urban Population/statistics & numerical data
ABSTRACT
In many applications, sampled data are collected in an irregular fashion or are partly lost or unavailable. In these cases, it is necessary to convert irregularly sampled signals to regularly sampled ones or to restore missing data. We address this problem in the framework of a discrete sampling theorem for band-limited discrete signals that have a limited number of nonzero transform coefficients in a certain transform domain. Conditions for unique image recovery from sparse samples are formulated and then analyzed for various transforms. Applications are demonstrated on examples of image superresolution and image reconstruction from sparse projections.
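A minimal one-dimensional illustration of the idea: if the signal's nonzero transform coefficients are confined to a few low harmonics of a Fourier basis (an assumption for this sketch; the paper considers various transforms), the signal can be recovered from sparse, irregular samples by least squares on the truncated basis, provided there are at least as many samples as unknown coefficients:

```python
import numpy as np

def recover_bandlimited(sample_idx, sample_vals, n, n_harmonics):
    """Recover a length-n signal band-limited to `n_harmonics` Fourier
    harmonics from sparse, irregular samples, via least squares on a
    truncated real Fourier (DC + cosine/sine) basis."""
    def basis(positions):
        t = 2 * np.pi * np.asarray(positions, dtype=float) / n
        cols = [np.ones_like(t)]
        for m in range(1, n_harmonics + 1):
            cols += [np.cos(m * t), np.sin(m * t)]
        return np.column_stack(cols)

    # Solve for the 2*n_harmonics + 1 coefficients from the available samples.
    A = basis(sample_idx)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(sample_vals, dtype=float), rcond=None)
    # Evaluate the fitted expansion on the full regular grid.
    return basis(np.arange(n)) @ coeffs
```

When the sample count equals or exceeds the number of nonzero coefficients and the basis restricted to the sample positions has full rank, the recovery is exact, which mirrors the uniqueness conditions discussed in the abstract.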