ABSTRACT
This paper is devoted to the sensor selection problem. A broadband receive beamformer operating in the near field is considered. The system response should be as close as possible to the desired one, with closeness measured in the L2-norm sense. The problem considered is at least NP-hard; therefore, a branch-and-bound algorithm is developed to solve it. The proposed approach is universal and can be applied not only to microphone arrays but also to antenna arrays; that is, the methodology for generating consecutive solutions can be applied to different types of sensor selection problems. Next, for a larger microphone array, an efficient metaheuristic algorithm is constructed: a hybrid genetic algorithm based on the Itô process. Numerical experiments show that the proposed approach can be successfully applied to the sensor selection problem.
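The abstract does not give the search details, so the following is a minimal branch-and-bound sketch of the subset-selection idea it describes: choose p of n sensor columns so that the least-squares fit to a desired response d is best in the L2 sense. The matrix A, the pruning bound, and all sizes are illustrative assumptions, not the paper's beamformer model.

```python
import numpy as np

def ls_residual(A, d, cols):
    """Best achievable squared L2 error using only the sensor columns in cols."""
    if not cols:
        return float(d @ d)
    w, *_ = np.linalg.lstsq(A[:, cols], d, rcond=None)
    r = d - A[:, cols] @ w
    return float(r @ r)

def select_sensors_bnb(A, d, p):
    n = A.shape[1]
    best_cost, best_sel = np.inf, None

    def recurse(j, chosen):
        nonlocal best_cost, best_sel
        if len(chosen) == p:
            c = ls_residual(A, d, chosen)
            if c < best_cost:
                best_cost, best_sel = c, list(chosen)
            return
        if j == n or len(chosen) + (n - j) < p:
            return  # not enough undecided sensors left to reach p
        # Bound: the residual with every undecided sensor still allowed can only
        # be lower than that of any feasible completion, so prune against it.
        if ls_residual(A, d, chosen + list(range(j, n))) >= best_cost:
            return
        recurse(j + 1, chosen + [j])  # branch: include sensor j
        recurse(j + 1, chosen)        # branch: exclude sensor j

    recurse(0, [])
    return best_sel, best_cost

# toy usage: columns of A sample each sensor's contribution to the response
rng = np.random.default_rng(0)
A, d = rng.normal(size=(40, 10)), rng.normal(size=40)
print(select_sensors_bnb(A, d, p=4))
```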
ABSTRACT
Crop height and biomass are two important phenotyping traits for screening forage population types at local and regional scales. This study compares the performance of multispectral and RGB sensors onboard drones for quantitative retrieval of forage crop height and biomass at very high resolution. We acquired unmanned aerial vehicle (UAV) multispectral images (MSIs) at 1.67 cm spatial resolution and visible (RGB) data at 0.31 cm resolution, and measured forage height and above-ground biomass over alfalfa (Medicago sativa L.) breeding trials in the Canadian Prairies. (1) For height estimation, the digital surface model (DSM) and digital terrain model (DTM) were extracted from the MSI and RGB data, respectively. As the DSM and DTM resolutions differ by roughly a factor of five, we applied an aggregation algorithm to constrain the DSM and DTM to the same spatial resolution. The difference between the DSM and DTM was computed as the canopy height model (CHM), at 8.35 cm and 1.55 cm resolution for the MSI and RGB data, respectively. (2) For biomass estimation, the normalized difference vegetation index (NDVI) from the MSI data and the excess green (ExG) index from the RGB data were regressed against ground measurements, leading to empirical models. The results indicate better performance of MSI for above-ground biomass (AGB) retrieval at 1.67 cm resolution, and better performance of RGB data for canopy height retrieval at 1.55 cm. Although the retrieved height correlated well with the ground measurements, a significant underestimation was observed; we therefore developed a bias-correction function to match the retrievals to the ground measurements. This study provides insight into the optimal selection of sensors for specific targeted vegetation growth traits in a forage crop.
Subjects
Biomass , Algorithms , Unmanned Aerial Devices , Medicago sativa/growth & development , Agricultural Crops/growth & development
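As a rough illustration of the two retrieval steps summarized above, the sketch below computes a CHM by block-aggregating co-registered DSM/DTM rasters to a common grid and differencing them, plus the NDVI and ExG indices. Array shapes, the aggregation factor, and the per-pixel normalization in ExG are assumptions for the sketch.

```python
import numpy as np

def aggregate(raster, k):
    """Mean-aggregate a raster over k x k blocks to coarsen its resolution."""
    h, w = raster.shape
    r = raster[:h - h % k, :w - w % k]
    return r.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def canopy_height_model(dsm, dtm, k):
    """CHM = DSM - DTM once both rasters share one grid (assumed co-registered)."""
    return aggregate(dsm, k) - aggregate(dtm, k)

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def excess_green(r, g, b):
    """ExG = 2g - r - b on chromatic coordinates (bands normalized per pixel)."""
    total = r + g + b + 1e-9
    return 2 * g / total - r / total - b / total

# toy usage with random rasters standing in for the photogrammetric products
rng = np.random.default_rng(0)
dsm, dtm = rng.uniform(0.2, 0.8, (50, 50)), rng.uniform(0.0, 0.1, (50, 50))
print(canopy_height_model(dsm, dtm, k=5).shape)  # (10, 10) coarser CHM grid
```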
ABSTRACT
The measurement of respiratory volume from upper body movements by means of a smart shirt is increasingly in demand for medical applications. This research used upper body surface motions obtained with a motion capture system, and two regression methods, to determine the optimal selection and placement of sensors on a smart shirt for recovering respiratory parameters against benchmark spirometry values. The results of the two regression methods, Ridge regression and the least absolute shrinkage and selection operator (Lasso), were compared. This work shows that the Lasso method offers advantages over Ridge regression, as it provides sparse solutions and is more robust to outliers. However, both methods can be used in this application, since they lead to a similar sensor subset with lower computational demand (from exponential effort for a full exhaustive search down to the order of O(n²)). A smart shirt for respiratory volume estimation could replace spirometry in some cases and would allow more convenient measurement of respiratory parameters in home care or hospital settings.
Subjects
Benchmarking , Home Care Services , Humans , Linear Models , Tidal Volume , Hospitals
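A minimal sketch of the sparse-selection idea on synthetic data: regress a spirometry-like target on many candidate marker signals and keep the markers to which Lasso assigns nonzero weights, in contrast with Ridge, which shrinks all weights but selects none. Shapes, alpha values, and the ground-truth weights are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))                 # 60 candidate chest/abdomen markers
true_w = np.zeros(60)
true_w[[3, 17, 42]] = [1.0, 0.6, 0.8]          # only three markers truly matter
y = X @ true_w + 0.05 * rng.normal(size=500)   # synthetic "spirometry" target

lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("markers kept by Lasso:", selected)      # sparse subset -> sensor placement

ridge = Ridge(alpha=1.0).fit(X, y)             # dense weights: useful for ranking,
print("nonzero Ridge weights:", np.sum(np.abs(ridge.coef_) > 1e-6))  # but no selection
```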
ABSTRACT
In a world of rapidly changing technologies, reliance on complex engineered systems has become substantial. Interactions associated with such systems, as well as the associated manufacturing processes, continue to evolve and grow in complexity; the complexity of manufacturing processes makes engineered systems vulnerable to cascading and escalating failures: truly a highly complex and evolving system of systems. Maintaining quality and reliability requires consideration during product development, manufacturing, and beyond, and monitoring the health of the complex system while in operation is imperative. These considerations have compelled designers to explore fault-mechanism models and to develop corresponding countermeasures. Increasingly, embedded sensors are relied upon to aid in prognosticating failures and reducing downtime during manufacture and system operation. However, the accuracy of estimating the remaining useful life of the system depends strongly on the quality of the data obtained, which, according to information theory, can be enhanced by increasing the number of sensors used. Yet adding sensors raises the total cost, both of the sensors themselves and of the associated information-gathering procedures. Determining the optimal number of sensors, the associated operating and data-acquisition costs, and the sensor configuration is nontrivial. It is also imperative to avoid redundant information from additional sensors and to display information to the decision-maker efficiently. Therefore, it is necessary to select a subset of sensors that not only reduces cost but is also informative. While progress has been made in the sensor selection process, it is limited to either the type of sensor, the number of sensors, or both; such approaches do not address the specifications of the required sensors, which are integral to the sensor selection process. This paper addresses these shortcomings through a new method, OFCCaTS, which avoids the increased cost associated with health monitoring while improving its accuracy. The proposed method uses a scalable multi-objective framework for sensor selection that maximizes the fault detection rate while minimizing the total cost of the sensors. A wind turbine gearbox is considered to demonstrate the efficacy of the proposed framework.
Subjects
Algorithms , Reproducibility of Results
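The OFCCaTS method itself is not spelled out in the abstract; the sketch below only illustrates the bare trade-off it optimizes, by enumerating small sensor subsets and keeping the Pareto front of (fault detection rate, total cost). The fault-coverage stand-in for the detection rate is an assumption.

```python
from itertools import combinations

def pareto_front(sensors, costs, detect_rate, max_size):
    """Enumerate subsets and keep the (detection rate, cost) Pareto front."""
    pts = [(detect_rate(sub), sum(costs[s] for s in sub), sub)
           for k in range(1, max_size + 1)
           for sub in combinations(sensors, k)]
    dominated = lambda p: any(q[0] >= p[0] and q[1] <= p[1]
                              and (q[0] > p[0] or q[1] < p[1]) for q in pts)
    return sorted((p for p in pts if not dominated(p)), key=lambda p: p[1])

# toy usage: detection rate = fraction of fault modes covered by the subset
faults = {"s1": {1, 2}, "s2": {2, 3, 4}, "s3": {5}}
costs = {"s1": 10.0, "s2": 25.0, "s3": 8.0}
rate = lambda sub: len(set().union(*(faults[s] for s in sub))) / 5
for dr, c, sub in pareto_front(list(faults), costs, rate, 3):
    print(f"{sub}: detection={dr:.2f}, cost={c:.0f}")
```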
ABSTRACT
To cope with the increasing complexity of new mechatronic systems and stricter safety regulations, advanced estimation algorithms are currently undergoing a transformation towards higher model complexity. However, more complex models often face issues regarding observability and the computational effort required. Moreover, sensor selection is often still conducted pragmatically, based on experience and convenience, whereas a more cost-effective approach would be to evaluate sensors based on their effective estimation performance. In this work, a novel estimation and sensor selection approach is presented that is able to stabilise the estimator Riccati equation for unobservable and non-linear system models. This is possible when estimators target only some specific quantities of interest that do not necessarily depend on all system states. An Extended Kalman Filter-based estimation framework is proposed in which the Riccati equation is projected onto an observable subspace, based on a Singular Value Decomposition (SVD) of the Kalman observability matrix. Furthermore, a sensor selection methodology is proposed that ranks candidate sensors according to their estimation performance, as evaluated by the error covariance of the quantities of interest. This allows the performance of a sensor set to be evaluated without the need for costly test campaigns. Finally, the proposed methods are evaluated on a numerical example as well as an automotive experimental validation case.
Subjects
Algorithms
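A small sketch of the projection idea described above, under a linear time-invariant assumption: build the Kalman observability matrix, take its SVD, and keep the right singular vectors with non-negligible singular values as a basis for the observable subspace onto which the Riccati recursion can be projected. The matrices below are illustrative.

```python
import numpy as np

def observable_basis(F, H, tol=1e-8):
    """Basis of the observable subspace via SVD of the Kalman observability matrix."""
    n = F.shape[0]
    blocks, M = [], H.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ F
    O = np.vstack(blocks)               # Kalman observability matrix [H; HF; ...]
    _, s, Vt = np.linalg.svd(O)
    r = int(np.sum(s > tol * s[0]))     # numerical rank
    return Vt[:r].T                     # n x r basis T

F = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])         # the third state is never measured
T = observable_basis(F, H)
Fr, Hr = T.T @ F @ T, H @ T             # reduced model for the projected Riccati/EKF
print("observable dimension:", T.shape[1])
```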
ABSTRACT
Internet of Things (IoT)-based target tracking systems are required in applications such as smart farms, smart factories, and smart cities, where many sensor devices are jointly connected to collect the positions of moving targets. Each sensor device runs continuously on battery power, consuming energy while perceiving target information in a particular environment. To reduce sensor device energy consumption in real-time IoT tracking applications, many traditional methods, such as clustering, information-driven, and other approaches, have previously been used to select the best sensor. However, applying machine learning methods, particularly deep reinforcement learning (Deep RL), to the sensor selection problem in tracking applications is quite demanding because of the limited sensor node battery lifetime. In this study, we propose a long short-term memory deep Q-network (DQN)-based Deep RL target tracking model to overcome the problem of energy consumption in IoT tracking applications. The proposed method selects the most energy-efficient sensor while tracking the target: the best sensor is defined by a minimum-distance function (derived as the state), which leads to lower energy consumption. The simulation results show favorable behavior in terms of best sensor selection and energy consumption.
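At its core, the "best sensor" rule described above picks the node closest to the current target estimate, since a shorter range implies lower transmit energy. The sketch below shows only that minimum-distance state; the paper wraps it in an LSTM/DQN policy, which is not reproduced here. The coordinates are made up.

```python
import numpy as np

def best_sensor(sensor_xy, target_xy):
    """Return the index of the closest sensor and the distance 'state' vector."""
    d = np.linalg.norm(sensor_xy - target_xy, axis=1)
    return int(np.argmin(d)), d

sensors = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 25.0]])
idx, dists = best_sensor(sensors, np.array([18.0, 22.0]))
print(f"wake sensor {idx} (distance {dists[idx]:.1f} m); the others stay asleep")
```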
ABSTRACT
Real-time estimation of temperatures in indoor environments is critical for several reasons, including the maintenance of comfort levels, the fulfillment of legal requirements, and energy efficiency. Unfortunately, installing enough sensors at the desired locations to ensure uniform temperature monitoring of given premises may be troublesome. Virtual sensing is a set of techniques for replacing a subset of physical sensors with virtual ones, allowing the monitoring of unreachable locations, reducing sensor deployment costs, and providing a fallback solution for sensor failures. In this paper, we deal with temperature monitoring in an open-space office where a set of physical sensors is deployed at uneven locations. Our main goal is to develop a black-box virtual sensing framework, completely independent of the physical characteristics of the considered scenario, that can in principle be adapted to any indoor environment. We first perform a systematic analysis of various distance metrics that can be used to determine the best sensors on which to base temperature monitoring. Then, following a genetic programming approach, we design a novel metric that combines and summarizes the information carried by the considered distance metrics, outperforming their effectiveness. Thereafter, we propose a general and automatic approach to determining the best subset of sensors worth keeping in a given room. Leveraging the selected sensors, we then conduct a comprehensive assessment of different strategies for predicting the temperatures observed by physical sensors from other sensors' data, also evaluating the reliability of the generated outputs. The results show that, at least in the given scenario, the proposed black-box approach is capable of automatically selecting a subset of sensors and of deriving a virtual sensing model for accurate and efficient monitoring of the environment.
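A minimal sketch of the virtual-sensing step on synthetic readings: rank physical sensors by a distance metric (plain Euclidean distance here, whereas the paper evolves a better metric by genetic programming), then predict the virtualized sensor's temperature from the k best-ranked ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
pos = rng.uniform(0, 30, size=(12, 2))             # 12 sensors in a 30 m office
# synthetic temperature histories with a mild spatial gradient plus drift
T = 21 + 0.1 * pos[:, 0] + rng.normal(0, 0.1, size=(200, 12)).cumsum(0) * 0.01
target = 5                                         # the sensor to virtualize

d = np.linalg.norm(pos - pos[target], axis=1)      # ranking metric (assumed)
k_best = np.argsort(d)[1:4]                        # 3 nearest, excluding itself
model = LinearRegression().fit(T[:150, k_best], T[:150, target])
pred = model.predict(T[150:, k_best])
print("hold-out MAE:", np.mean(np.abs(pred - T[150:, target])))
```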
ABSTRACT
Boiler waterwall tube leakage is the most probable cause of failure in steam power plants (SPPs). The development of an intelligent tube leak detection system can increase the efficiency and reliability of modern power plants. The idea of e-maintenance based on multivariate algorithms was recently introduced for intelligent fault detection and diagnosis in SPPs. However, these multivariate algorithms are highly dependent on the number of input process variables (sensors). This work therefore proposes a machine learning-based model integrated with an optimal sensor selection scheme to analyze boiler waterwall tube leakage. A real SPP test case is employed to validate the proposed model's effectiveness. The results indicate that the proposed model can successfully detect waterwall tube leakage with improved accuracy compared with other similar models.
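The abstract does not specify its selection scheme, so the sketch below uses a generic stand-in: rank process variables by mutual information with the leak/no-leak label, keep the top-k, and train a classifier on them. The data, the value of k, and the classifier choice are all assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 30))                # 30 boiler process variables
# synthetic labels driven by two of the variables (the "leak signature")
y = (X[:, 4] + 0.8 * X[:, 12] + 0.3 * rng.normal(size=1000) > 0).astype(int)

mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:8]                 # keep the 8 most informative sensors
Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("selected sensors:", sorted(top.tolist()), "accuracy:", clf.score(Xte, yte))
```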
ABSTRACT
Sensor selection plays an essential and fundamental role in prognostics and health management (PHM) technology, and it is closely related to fault diagnosis, life prediction, and health assessment. Existing sensor selection methods lack a common evaluation standard, which leads to inconsistent selection results and does not help with the selection and layout of sensors. This paper proposes a comprehensive evaluation method for PHM sensor selection based on grey clustering. The described approach divides sensors into three grey classes, and defines and quantifies three grey indexes based on a dependency matrix. After a brief introduction to the whitening weight function, we propose a combination weight that considers both objective data and subjective tendency to improve the effectiveness of the selection result. Finally, the clustering result for the sensors is obtained by analyzing the clustering coefficient, which is calculated using grey clustering theory. The proposed approach is illustrated on an electronic control system, for which the effectiveness of different sensor selection methods is compared. The results show that the technique gives a convincing analysis by evaluating the selection results of different methods, and is also very helpful for adjusting sensors to obtain a more precise result. This approach can be utilized in sensor selection and evaluation for prognostics and health management.
Subjects
Biosensing Techniques , Electronics , Population Health Management , Prognosis , Algorithms , Cluster Analysis , Humans
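A sketch of the grey-clustering machinery the abstract refers to: triangular whitening weight functions for three grey classes, a combination-weighted clustering coefficient per sensor, and assignment to the class with the largest coefficient. The class names, breakpoints, weights, and scores are all illustrative.

```python
import numpy as np

def whitening_weight(x, lo, peak, hi):
    """Triangular whitening weight function on [lo, hi], peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

# three grey classes over a normalized index in [0, 1] (names are placeholders)
classes = {"auxiliary": (0.0, 0.2, 0.5),
           "important": (0.2, 0.5, 0.8),
           "key":       (0.5, 0.8, 1.0)}
# rows: sensors; columns: grey indexes derived from a dependency matrix
scores = np.array([[0.9, 0.7, 0.8],
                   [0.4, 0.5, 0.6],
                   [0.1, 0.2, 0.3]])
weights = np.array([0.5, 0.3, 0.2])   # combined objective/subjective weights

for i, row in enumerate(scores):
    coeff = {name: sum(w * whitening_weight(x, *abc) for w, x in zip(weights, row))
             for name, abc in classes.items()}
    print(f"sensor {i}: class = {max(coeff, key=coeff.get)}  {coeff}")
```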
ABSTRACT
Engine prognostics are critical to improving the safety, reliability, and operational efficiency of an aircraft. With developments in sensor technology, multiple sensors are embedded or deployed to monitor the health condition of the aircraft engine; the challenge of engine prognostics thus lies in how to model and predict future health by appropriately utilizing this sensor information. In this paper, a prognostic approach is developed based on informative sensor selection and adaptive degradation modeling with functional data analysis. The presented approach selects sensors based on metrics and constructs a health index that characterizes engine degradation by fusing the selected informative sensors. The engine degradation is then adaptively modeled with the functional principal component analysis (FPCA) method, and future health is prognosticated using Bayesian inference. The prognostic approach is applied to run-to-failure data sets from the C-MAPSS test-bed developed by NASA. Results show that the proposed method can effectively select the informative sensors and accurately predict the complex degradation of the aircraft engine.
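A sketch of the front end of such a pipeline on synthetic run-to-failure traces: score each channel with a simple monotonicity metric, keep the informative ones, and fuse them into a scalar health index with ordinary PCA. The FPCA modeling and Bayesian prognosis stages are beyond this snippet, and the 0.7 threshold is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
t = np.arange(300)
# synthetic engine channels: two degrade with time, one is pure noise
sensors = np.column_stack([t * 0.01 + rng.normal(0, 0.3, 300),
                           -t * 0.02 + rng.normal(0, 0.5, 300),
                           rng.normal(0, 1.0, 300)])

mono = np.array([abs(spearmanr(t, s)[0]) for s in sensors.T])  # monotonicity score
informative = np.flatnonzero(mono > 0.7)
print("selected channels:", informative)

health_index = PCA(n_components=1).fit_transform(sensors[:, informative]).ravel()
print("health index range:", health_index.min(), "->", health_index.max())
```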
ABSTRACT
This paper considers binary Gaussian distribution robust hypothesis testing under a Bayesian optimal criterion in the wireless sensor network (WSN). The distribution covariance matrix under each hypothesis is known, while the distribution mean vector under each hypothesis drifts in an ellipsoidal uncertainty set. Because of the limited bandwidth and energy, we aim at seeking a subset of p out of m sensors such that the best detection performance is achieved. In this setup, the minimax robust sensor selection problem is proposed to deal with the uncertainties of the distribution means. Following a popular method, minimizing the maximum overall error probability with respect to the selection matrix can be approximated by maximizing the minimum Chernoff distance between the distributions of the selected measurements under the null hypothesis and the alternative hypothesis to be detected. Then, we utilize Danskin's theorem to compute the gradient of the objective function of the converted maximization problem, and apply the orthogonal constraint-preserving gradient algorithm (OCPGA) to solve the relaxed maximization problem without 0/1 constraints. It is shown that the OCPGA can obtain a stationary point of the relaxed problem. Meanwhile, we provide the computational complexity of the OCPGA, which is much lower than that of the existing greedy algorithm. Finally, numerical simulations illustrate that, after the same projection and refinement phases, the OCPGA-based method obtains better solutions than the greedy algorithm-based method, with up to 48.72% shorter runtimes. In particular, for small-scale problems, the OCPGA-based method is able to attain the globally optimal solution.
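A sketch of the objective being maximized: the Chernoff distance between the two selected Gaussian measurement distributions. Fixing s = 1/2 gives the Bhattacharyya distance, used here to keep the sketch short; the brute-force subset search stands in for the OCPGA, and all distribution parameters are made up.

```python
import numpy as np
from itertools import combinations

def chernoff_half(mu0, mu1, S0, S1, sel):
    """Chernoff distance at s = 1/2 (Bhattacharyya) between selected measurements."""
    m = mu1[sel] - mu0[sel]
    A0, A1 = S0[np.ix_(sel, sel)], S1[np.ix_(sel, sel)]
    A = 0.5 * (A0 + A1)
    maha = 0.125 * m @ np.linalg.solve(A, m)
    logdet = 0.5 * np.log(np.linalg.det(A)
                          / np.sqrt(np.linalg.det(A0) * np.linalg.det(A1)))
    return maha + logdet

rng = np.random.default_rng(4)
mu0, mu1 = np.zeros(6), 0.8 * np.ones(6)
Q = rng.normal(size=(6, 6))
S0 = Q @ Q.T + 6 * np.eye(6)               # known covariance under H0
S1 = S0 + 0.5 * np.eye(6)                  # known covariance under H1

# brute-force the best 3-of-6 subset for reference (the OCPGA scales this up)
best = max(combinations(range(6), 3),
           key=lambda s: chernoff_half(mu0, mu1, S0, S1, list(s)))
print("best 3-sensor subset:", best)
```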
ABSTRACT
This paper proposes a time difference of arrival (TDOA) passive positioning sensor selection method based on tabu search, to balance the positioning accuracy of the sensor network against system consumption. First, a passive time-difference positioning model that accounts for sensor position errors is considered. Then, an approximate closed-form constrained total least-squares (CTLS) solution and a covariance matrix of the positioning error are derived. By introducing a Boolean selection vector, the sensor selection problem is transformed into an optimization problem that minimizes the trace of the positioning-error covariance matrix. Tabu search is then employed to solve the transformed sensor selection problem. Simulation results show that the performance of the proposed sensor selection method closely approaches that of exhaustive search while significantly reducing the running time, improving the timeliness of the algorithm.
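A generic tabu-search sketch for the transformed problem: minimize a scalar cost over Boolean selection vectors with a fixed number of active sensors, using swap moves and a short-term tabu list. The cov_trace surrogate below merely rewards spread-out sensors; the paper's CTLS positioning-error covariance would take its place.

```python
import numpy as np

def tabu_select(n, p, cov_trace, iters=200, tenure=7, seed=0):
    rng = np.random.default_rng(seed)
    sel = np.zeros(n, dtype=bool)
    sel[rng.choice(n, size=p, replace=False)] = True
    best_sel, best_cost, tabu = sel.copy(), cov_trace(sel), {}
    for it in range(iters):
        moves = [(i, j) for i in np.flatnonzero(sel) for j in np.flatnonzero(~sel)
                 if tabu.get((i, j), -1) < it]          # admissible swaps only
        if not moves:
            continue
        def swapped_cost(move):
            cand = sel.copy()
            cand[move[0]], cand[move[1]] = False, True
            return cov_trace(cand)
        i, j = min(moves, key=swapped_cost)             # best admissible swap
        sel[i], sel[j] = False, True
        tabu[(j, i)] = it + tenure                      # forbid the reverse swap
        if cov_trace(sel) < best_cost:
            best_sel, best_cost = sel.copy(), cov_trace(sel)
    return best_sel, best_cost

# toy usage: the surrogate cost rewards geometrically spread selections
pos = np.random.default_rng(1).uniform(0, 100, size=(12, 2))
cost = lambda s: 1e4 / (pos[s].var(axis=0).sum() + 1e-9)
sel, c = tabu_select(12, 5, cost)
print("selected sensors:", np.flatnonzero(sel), "cost:", round(c, 2))
```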
ABSTRACT
Wearable health monitoring has emerged as a promising solution to the growing need for remote health assessment and the growing demand for personalized preventative care and wellness management. Vital signs can be monitored and alerts raised when anomalies are detected, potentially improving patient outcomes. One major challenge for wearable health devices is their energy efficiency and battery lifetime, which motivates recent efforts towards the development of self-powered wearable devices. This article proposes a method for context-aware dynamic sensor selection for power-optimized physiological prediction using multi-modal wearable data streams. We first cluster the data by physical activity using the accelerometer data, and then fit a group lasso model to each activity cluster. We find the optimal reduced set of groups of sensor features, in turn reducing power usage by duty-cycling the corresponding sensors while optimizing prediction accuracy. We show that using activity-state-based contextual information increases accuracy while decreasing power usage. We also show that the reduced feature set can be used in other regression models, increasing accuracy and decreasing the energy burden. We demonstrate the potential reduction in power usage using a custom-designed multi-modal wearable system prototype.
Subjects
Actigraphy/instrumentation , Electric Power Supplies/economics , Telemedicine/economics , Wearable Electronic Devices/economics , Accelerometry/statistics & numerical data , Cluster Analysis , Humans , Wearable Electronic Devices/standards
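A sketch of the context-aware pipeline on synthetic data: cluster windows by accelerometer features into activity states, then fit a sparse model per state; the channels with nonzero weights are the ones worth powering in that state. Plain Lasso stands in for the paper's group lasso to keep the sketch dependency-free.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
accel = rng.normal(size=(600, 3))                 # accelerometer features per window
X = rng.normal(size=(600, 20))                    # other wearable sensor channels
y = 0.9 * X[:, 2] + 0.5 * X[:, 11] + 0.1 * rng.normal(size=600)  # e.g., a vital sign

states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(accel)
for s in range(3):
    m = states == s
    coef = Lasso(alpha=0.05).fit(X[m], y[m]).coef_
    keep = np.flatnonzero(np.abs(coef) > 1e-6)
    print(f"activity {s}: power channels {keep.tolist()}, duty-cycle off the rest")
```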
ABSTRACT
This paper considers the posterior Cramér-Rao bound and the sensor selection problem for multi-sensor nonlinear systems with uncertain observations. To effectively overcome the difficulties caused by this uncertainty, we investigate two methods for deriving the posterior Cramér-Rao bound. The first method is based on the recursive formula for the Cramér-Rao bound and the Gaussian mixture model. However, it requires computing a complex integral of the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks. Inspired by the expectation-maximization algorithm, the second method introduces 0-1 latent variables to handle the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we approximate the discrete latent variables with continuous ones; a new Cramér-Rao bound is then obtained as the limit of the Cramér-Rao bound of the continuous system. This avoids the complex integral, which reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically, so the method can handle sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
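For context, a sketch of the classical recursive posterior Cramér-Rao bound for a linear-Gaussian model, J_{k+1} = (Q + F J_k^{-1} F^T)^{-1} + H^T R^{-1} H. The paper's contribution is extending such a recursion to uncertain (Gaussian-mixture) observations, which this certain-observation baseline does not capture.

```python
import numpy as np

def pcrb_recursion(F, H, Q, R, J0, steps):
    """Recursive posterior CRB information matrix for a linear-Gaussian model."""
    J, bounds = J0.copy(), []
    for _ in range(steps):
        J = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T) + H.T @ np.linalg.inv(R) @ H
        bounds.append(np.trace(np.linalg.inv(J)))   # trace of the PCRB matrix
    return bounds

F = np.array([[1.0, 1.0], [0.0, 1.0]])      # constant-velocity target model
H = np.array([[1.0, 0.0]])                  # position-only sensor
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
print(pcrb_recursion(F, H, Q, R, np.eye(2), steps=5))
```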
ABSTRACT
A new sensor selection optimization algorithm is proposed in this paper for decentralized large-scale multi-target tracking (MTT) networks within a labeled random finite set (RFS) framework. The method operates on a marginalized δ-generalized labeled multi-Bernoulli RFS, and the weighted Kullback-Leibler average (KLA) rule is used to fuse local multi-target densities. A new metric, called the label assignment (LA) metric, is proposed to measure the distance between two labeled sets. A lower bound on the LA-metric-based mean square error between the labeled multi-target state set and its estimate is taken as the objective function of sensor selection; the proposed bound is obtained by applying the information inequality to RFS measurements. We then present sequential Monte Carlo and Gaussian mixture implementations of the bound. A further advantage of the bound is that it provides a basis for setting the KLA weights. A coordinate descent method is proposed to balance the computational cost of sensor selection against MTT accuracy. Simulations verify the effectiveness of our method under different signal-to-noise ratio scenarios.
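A sketch of the weighted KLA fusion rule for the Gaussian case, where it reduces to covariance intersection: averaging the local densities in information (inverse-covariance) form with the fusion weights. The means, covariances, and weights below are illustrative.

```python
import numpy as np

def kla_fuse(means, covs, weights):
    """Weighted KLA of Gaussians N(means[i], covs[i]): information-form averaging."""
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    vec = sum(w * np.linalg.inv(P) @ m for w, m, P in zip(weights, means, covs))
    P = np.linalg.inv(info)
    return P @ vec, P

m1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
m2, P2 = np.array([1.4, 0.2]), np.diag([3.0, 1.0])
m, P = kla_fuse([m1, m2], [P1, P2], [0.6, 0.4])
print("fused mean:", m)
print("fused covariance:\n", P)
```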
ABSTRACT
This paper presents a comparative study of the performance of different sizes of sensor sets for polymer electrolyte membrane (PEM) fuel cell fault diagnosis. The effectiveness of three sensor sets, namely fuel cell voltage only, all available sensors, and a selected optimal subset, in detecting and isolating fuel cell faults (e.g., cell flooding and membrane dehydration) is investigated using test data from a PEM fuel cell system. Wavelet packet transform and kernel principal component analysis are employed to reduce the dimensionality of the dataset and extract features for state classification. Results demonstrate that the selected optimal sensors provide the best diagnostic performance, with different fuel cell faults detected and isolated with good quality.
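A sketch of the named feature pipeline on synthetic voltage windows: wavelet-packet energies per window, kernel PCA for dimensionality reduction, and a simple classifier for the three states (normal / flooding / dehydration). The wavelet, depth, and kernel settings are assumptions, and the pywt and scikit-learn packages are assumed installed.

```python
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def wp_energies(signal, wavelet="db4", level=3):
    """Energy of each wavelet-packet node at the given decomposition level."""
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, "freq")])

rng = np.random.default_rng(6)
# 90 synthetic voltage windows with a small per-class offset as a stand-in
windows = rng.normal(size=(90, 256)) + np.repeat([0.0, 0.3, -0.3], 30)[:, None]
labels = np.repeat([0, 1, 2], 30)

X = np.array([wp_energies(w) for w in windows])
Z = KernelPCA(n_components=4, kernel="rbf").fit_transform(X)
clf = SVC().fit(Z[::2], labels[::2])              # train/test on alternating windows
print("held-out accuracy:", clf.score(Z[1::2], labels[1::2]))
```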
ABSTRACT
Standard Bayesian filtering algorithms only work well when the statistical properties of the system noises are exactly known; however, this assumption is not always plausible in real target tracking applications. In this paper, we present a new estimation approach, the adaptive fifth-degree cubature information filter (AFCIF), for multi-sensor bearings-only tracking (BOT) when the process noise follows a zero-mean Gaussian distribution with unknown covariance. The algorithm is based on the fifth-degree cubature Kalman filter and is constructed within the information filtering framework. With a sensor selection strategy developed using observability theory and a recursive process-noise covariance estimation procedure derived from the covariance matching principle, the proposed filter demonstrates better estimation accuracy and filtering stability. Simulation results validate the superiority of the AFCIF.
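A sketch of the covariance-matching principle mentioned above, in one common heuristic form: estimate the process-noise covariance as Q ≈ K C K^T, with C the windowed sample covariance of the filter innovations and K the Kalman gain. This is a generic adaptive-filtering recipe, not the AFCIF's exact recursion.

```python
import numpy as np

def matched_Q(innovations, K):
    """Covariance-matching heuristic: Q ~= K C K^T, C = sample innovation covariance."""
    C = sum(np.outer(v, v) for v in innovations) / len(innovations)
    return K @ C @ K.T

# toy usage: scalar measurement, two-state model (gain value is illustrative)
K = np.array([[0.7], [0.3]])
innov = [np.array([x]) for x in np.random.default_rng(7).normal(0.0, 0.5, 50)]
print(matched_Q(innov, K))          # adapted process-noise covariance estimate
```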
ABSTRACT
Homology groups are a prime tool for measuring the connectivity of a network, and computing them in a distributed and adaptive way is mandatory for their use in sensor networks. In this paper, we propose a solution based on the construction of an adaptive discrete vector field from which, thanks to discrete Morse theory, the generators of the homology groups are extracted. The efficiency and adaptability of our approach are tested on two applications: the detection and localization of holes in the coverage, and the selection of active sensors ensuring complete coverage.
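The paper extracts homology generators through discrete Morse theory on an adaptive discrete vector field; as a lightweight illustration of the same question ("does the coverage have holes?"), the sketch below computes H1 persistence of sensor positions with the third-party ripser package (assumed installed), flagging long-lived 1-cycles as holes.

```python
import numpy as np
from ripser import ripser   # assumes: pip install ripser

rng = np.random.default_rng(8)
theta = rng.uniform(0, 2 * np.pi, 60)
# sensors scattered around an annulus, i.e., a coverage region with one hole
ring = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(60, 2))

dgms = ripser(ring, maxdim=1)["dgms"]
h1 = dgms[1]                                     # (birth, death) pairs of 1-cycles
persistent = h1[(h1[:, 1] - h1[:, 0]) > 0.5]     # long-lived cycles = real holes
print(f"{len(persistent)} coverage hole(s) detected")   # expect 1 for the ring
```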
ABSTRACT
Deployment of low-cost sensors in the field is increasingly popular. However, each sensor requires on-site calibration to increase the accuracy of its measurements. We established a laboratory method, the Average Slope Method, to select sensors with similar responses, so that a single on-site calibration for one sensor can be used for all the others. The laboratory method was performed with aerosolized salt. Based on linear regression, we calculated slopes for 100 particulate matter (PM) sensors; 50% of them fell within ±14% of the average slope. We then compared our Average Slope Method with an Individual Slope Method and concluded that the former balances convenience and precision for our application. The laboratory selection was tested in the field, where we deployed 40 PM sensors at spatially optimal locations inside a heavy-manufacturing site and performed a field calibration, calculating a slope for three PM sensors against a reference instrument at one location. The average slope was then applied to all PM sensors for mass-concentration calculations. The percent differences calculated in the field were similar to the laboratory results. We have therefore established a method that reduces the time and cost associated with calibrating low-cost sensors in the field.
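A sketch of the Average Slope Method as described: regress each low-cost sensor against the reference, average the slopes, and keep the sensors whose individual slope falls within ±14% of that average. Synthetic readings replace the aerosolized-salt chamber data.

```python
import numpy as np

rng = np.random.default_rng(9)
ref = rng.uniform(10, 200, 80)                    # reference PM readings, ug/m3
true_slopes = rng.normal(1.0, 0.12, 100)          # 100 low-cost sensors
readings = true_slopes[:, None] * ref + rng.normal(0, 3, (100, 80))

slopes = np.array([np.polyfit(ref, r, 1)[0] for r in readings])
avg = slopes.mean()
keep = np.flatnonzero(np.abs(slopes - avg) / avg <= 0.14)
print(f"average slope {avg:.3f}; {len(keep)} of 100 sensors within ±14%")
# a single field calibration of one kept sensor can then be shared by all of them
```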
ABSTRACT
Observation scheduling depends on an accurate understanding of a single sensor's observation capability and of the interrelated observation capabilities of multiple sensors. General ontologies for sensors and observations are abundant; however, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model built around a task-sensor-observation capability (TSOC) ontology pattern. The pattern draws on the stimulus-sensor-observation (SSO) ontology design pattern and focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA-ontology-based association method can help sensor planners make intuitive, accurate, and evidence-based sensor selection decisions for a given flood observation task, facilitating efficient and effective observational planning for flood satellite sensors.
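A toy encoding of the task-sensor-observation capability (TSOC) pattern as RDF triples with rdflib, under a made-up namespace; every class and property name below is illustrative only, not the actual SOCA vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

SOCA = Namespace("http://example.org/soca#")   # hypothetical namespace
g = Graph()
g.add((SOCA.FloodTask, RDF.type, SOCA.ObservationTask))
g.add((SOCA.Sensor_GF1_WFV, RDF.type, SOCA.SatelliteSensor))
g.add((SOCA.Cap_GF1, RDF.type, SOCA.ObservationCapability))
g.add((SOCA.Cap_GF1, SOCA.spatialResolution_m, Literal(16.0)))
g.add((SOCA.Sensor_GF1_WFV, SOCA.hasCapability, SOCA.Cap_GF1))
g.add((SOCA.Cap_GF1, SOCA.suitableFor, SOCA.FloodTask))

# ask which sensors can serve the flood task through their capabilities
q = """SELECT ?s WHERE {
         ?s <http://example.org/soca#hasCapability> ?c .
         ?c <http://example.org/soca#suitableFor> <http://example.org/soca#FloodTask> }"""
for row in g.query(q):
    print("candidate sensor:", row.s)
```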